Sep 13 00:00:07.333760 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 13 00:00:07.333782 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Sep 12 22:36:20 -00 2025
Sep 13 00:00:07.333790 kernel: KASLR enabled
Sep 13 00:00:07.333796 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Sep 13 00:00:07.333803 kernel: printk: bootconsole [pl11] enabled
Sep 13 00:00:07.333809 kernel: efi: EFI v2.7 by EDK II
Sep 13 00:00:07.333816 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f214018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18
Sep 13 00:00:07.333822 kernel: random: crng init done
Sep 13 00:00:07.333828 kernel: ACPI: Early table checksum verification disabled
Sep 13 00:00:07.333834 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Sep 13 00:00:07.333840 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 13 00:00:07.333846 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 13 00:00:07.333854 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Sep 13 00:00:07.333860 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 13 00:00:07.333868 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 13 00:00:07.333874 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 13 00:00:07.333881 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 13 00:00:07.333889 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 13 00:00:07.333895 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 13 00:00:07.333901 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Sep 13 00:00:07.333908 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 13 00:00:07.333914 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Sep 13 00:00:07.333921 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Sep 13 00:00:07.333927 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Sep 13 00:00:07.333933 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Sep 13 00:00:07.333940 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Sep 13 00:00:07.333946 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Sep 13 00:00:07.333953 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Sep 13 00:00:07.333961 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Sep 13 00:00:07.333967 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Sep 13 00:00:07.333974 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Sep 13 00:00:07.333980 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Sep 13 00:00:07.333986 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Sep 13 00:00:07.333992 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Sep 13 00:00:07.333999 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff]
Sep 13 00:00:07.334005 kernel: Zone ranges:
Sep 13 00:00:07.334011 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Sep 13 00:00:07.334018 kernel: DMA32 empty
Sep 13 00:00:07.334024 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Sep 13 00:00:07.334030 kernel: Movable zone start for each node
Sep 13 00:00:07.334041 kernel: Early memory node ranges
Sep 13 00:00:07.334048 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Sep 13 00:00:07.334055 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff]
Sep 13 00:00:07.334061 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Sep 13 00:00:07.334068 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Sep 13 00:00:07.334077 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Sep 13 00:00:07.334084 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Sep 13 00:00:07.334090 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Sep 13 00:00:07.334097 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Sep 13 00:00:07.334104 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Sep 13 00:00:07.334111 kernel: psci: probing for conduit method from ACPI.
Sep 13 00:00:07.334118 kernel: psci: PSCIv1.1 detected in firmware.
Sep 13 00:00:07.334125 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 13 00:00:07.334131 kernel: psci: MIGRATE_INFO_TYPE not supported.
Sep 13 00:00:07.334138 kernel: psci: SMC Calling Convention v1.4
Sep 13 00:00:07.334144 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Sep 13 00:00:07.334151 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Sep 13 00:00:07.334159 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Sep 13 00:00:07.334166 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Sep 13 00:00:07.334173 kernel: pcpu-alloc: [0] 0 [0] 1
Sep 13 00:00:07.334179 kernel: Detected PIPT I-cache on CPU0
Sep 13 00:00:07.334186 kernel: CPU features: detected: GIC system register CPU interface
Sep 13 00:00:07.334193 kernel: CPU features: detected: Hardware dirty bit management
Sep 13 00:00:07.334200 kernel: CPU features: detected: Spectre-BHB
Sep 13 00:00:07.334207 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 13 00:00:07.334213 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 13 00:00:07.334220 kernel: CPU features: detected: ARM erratum 1418040
Sep 13 00:00:07.334227 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Sep 13 00:00:07.334235 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 13 00:00:07.334242 kernel: alternatives: applying boot alternatives
Sep 13 00:00:07.334250 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=e1b46f3c9e154636c32f6cde6e746a00a6b37ca7432cb4e16d172c05f584a8c9
Sep 13 00:00:07.334257 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 13 00:00:07.334264 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 13 00:00:07.334271 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 13 00:00:07.334278 kernel: Fallback order for Node 0: 0
Sep 13 00:00:07.334284 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Sep 13 00:00:07.334291 kernel: Policy zone: Normal
Sep 13 00:00:07.334298 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 13 00:00:07.334304 kernel: software IO TLB: area num 2.
Sep 13 00:00:07.334313 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB)
Sep 13 00:00:07.334320 kernel: Memory: 3982564K/4194160K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39488K init, 897K bss, 211596K reserved, 0K cma-reserved)
Sep 13 00:00:07.334327 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 13 00:00:07.334333 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 13 00:00:07.334341 kernel: rcu: RCU event tracing is enabled.
Sep 13 00:00:07.334348 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 13 00:00:07.334355 kernel: Trampoline variant of Tasks RCU enabled.
Sep 13 00:00:07.334362 kernel: Tracing variant of Tasks RCU enabled.
Sep 13 00:00:07.334368 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 13 00:00:07.334375 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 13 00:00:07.334382 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 13 00:00:07.334390 kernel: GICv3: 960 SPIs implemented
Sep 13 00:00:07.334397 kernel: GICv3: 0 Extended SPIs implemented
Sep 13 00:00:07.334403 kernel: Root IRQ handler: gic_handle_irq
Sep 13 00:00:07.334410 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Sep 13 00:00:07.334417 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Sep 13 00:00:07.334423 kernel: ITS: No ITS available, not enabling LPIs
Sep 13 00:00:07.334430 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 13 00:00:07.334437 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 13 00:00:07.334444 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 13 00:00:07.334451 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 13 00:00:07.334458 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 13 00:00:07.334466 kernel: Console: colour dummy device 80x25
Sep 13 00:00:07.334473 kernel: printk: console [tty1] enabled
Sep 13 00:00:07.334481 kernel: ACPI: Core revision 20230628
Sep 13 00:00:07.336522 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 13 00:00:07.336537 kernel: pid_max: default: 32768 minimum: 301
Sep 13 00:00:07.336545 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 13 00:00:07.336552 kernel: landlock: Up and running.
Sep 13 00:00:07.336559 kernel: SELinux: Initializing.
Sep 13 00:00:07.336567 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 13 00:00:07.336574 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 13 00:00:07.336585 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 13 00:00:07.336593 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 13 00:00:07.336600 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Sep 13 00:00:07.336607 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0
Sep 13 00:00:07.336614 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Sep 13 00:00:07.336621 kernel: rcu: Hierarchical SRCU implementation.
Sep 13 00:00:07.336629 kernel: rcu: Max phase no-delay instances is 400.
Sep 13 00:00:07.336643 kernel: Remapping and enabling EFI services.
Sep 13 00:00:07.336650 kernel: smp: Bringing up secondary CPUs ...
Sep 13 00:00:07.336657 kernel: Detected PIPT I-cache on CPU1
Sep 13 00:00:07.336665 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Sep 13 00:00:07.336674 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 13 00:00:07.336681 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 13 00:00:07.336688 kernel: smp: Brought up 1 node, 2 CPUs
Sep 13 00:00:07.336696 kernel: SMP: Total of 2 processors activated.
Sep 13 00:00:07.336703 kernel: CPU features: detected: 32-bit EL0 Support
Sep 13 00:00:07.336712 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Sep 13 00:00:07.336719 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 13 00:00:07.336727 kernel: CPU features: detected: CRC32 instructions
Sep 13 00:00:07.336734 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 13 00:00:07.336741 kernel: CPU features: detected: LSE atomic instructions
Sep 13 00:00:07.336749 kernel: CPU features: detected: Privileged Access Never
Sep 13 00:00:07.336756 kernel: CPU: All CPU(s) started at EL1
Sep 13 00:00:07.336763 kernel: alternatives: applying system-wide alternatives
Sep 13 00:00:07.336771 kernel: devtmpfs: initialized
Sep 13 00:00:07.336780 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 13 00:00:07.336787 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 13 00:00:07.336795 kernel: pinctrl core: initialized pinctrl subsystem
Sep 13 00:00:07.336803 kernel: SMBIOS 3.1.0 present.
Sep 13 00:00:07.336810 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Sep 13 00:00:07.336818 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 13 00:00:07.336826 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 13 00:00:07.336833 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 13 00:00:07.336841 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 13 00:00:07.336850 kernel: audit: initializing netlink subsys (disabled)
Sep 13 00:00:07.336857 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Sep 13 00:00:07.336865 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 13 00:00:07.336872 kernel: cpuidle: using governor menu
Sep 13 00:00:07.336879 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 13 00:00:07.336887 kernel: ASID allocator initialised with 32768 entries
Sep 13 00:00:07.336894 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 13 00:00:07.336901 kernel: Serial: AMBA PL011 UART driver
Sep 13 00:00:07.336909 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 13 00:00:07.336918 kernel: Modules: 0 pages in range for non-PLT usage
Sep 13 00:00:07.336925 kernel: Modules: 508992 pages in range for PLT usage
Sep 13 00:00:07.336932 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 13 00:00:07.336940 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 13 00:00:07.336947 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 13 00:00:07.336955 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 13 00:00:07.336962 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 13 00:00:07.336970 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 13 00:00:07.336977 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 13 00:00:07.336986 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 13 00:00:07.336994 kernel: ACPI: Added _OSI(Module Device)
Sep 13 00:00:07.337001 kernel: ACPI: Added _OSI(Processor Device)
Sep 13 00:00:07.337008 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 13 00:00:07.337016 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 13 00:00:07.337023 kernel: ACPI: Interpreter enabled
Sep 13 00:00:07.337030 kernel: ACPI: Using GIC for interrupt routing
Sep 13 00:00:07.337038 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Sep 13 00:00:07.337045 kernel: printk: console [ttyAMA0] enabled
Sep 13 00:00:07.337054 kernel: printk: bootconsole [pl11] disabled
Sep 13 00:00:07.337061 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Sep 13 00:00:07.337069 kernel: iommu: Default domain type: Translated
Sep 13 00:00:07.337076 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 13 00:00:07.337084 kernel: efivars: Registered efivars operations
Sep 13 00:00:07.337091 kernel: vgaarb: loaded
Sep 13 00:00:07.337098 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 13 00:00:07.337106 kernel: VFS: Disk quotas dquot_6.6.0
Sep 13 00:00:07.337113 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 13 00:00:07.337122 kernel: pnp: PnP ACPI init
Sep 13 00:00:07.337130 kernel: pnp: PnP ACPI: found 0 devices
Sep 13 00:00:07.337137 kernel: NET: Registered PF_INET protocol family
Sep 13 00:00:07.337145 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 13 00:00:07.337152 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 13 00:00:07.337160 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 13 00:00:07.337167 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 13 00:00:07.337175 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 13 00:00:07.337182 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 13 00:00:07.337191 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 13 00:00:07.337198 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 13 00:00:07.337206 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 13 00:00:07.337213 kernel: PCI: CLS 0 bytes, default 64
Sep 13 00:00:07.337220 kernel: kvm [1]: HYP mode not available
Sep 13 00:00:07.337228 kernel: Initialise system trusted keyrings
Sep 13 00:00:07.337235 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 13 00:00:07.337242 kernel: Key type asymmetric registered
Sep 13 00:00:07.337250 kernel: Asymmetric key parser 'x509' registered
Sep 13 00:00:07.337258 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 13 00:00:07.337266 kernel: io scheduler mq-deadline registered
Sep 13 00:00:07.337273 kernel: io scheduler kyber registered
Sep 13 00:00:07.337281 kernel: io scheduler bfq registered
Sep 13 00:00:07.337288 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 13 00:00:07.337295 kernel: thunder_xcv, ver 1.0
Sep 13 00:00:07.337303 kernel: thunder_bgx, ver 1.0
Sep 13 00:00:07.337310 kernel: nicpf, ver 1.0
Sep 13 00:00:07.337317 kernel: nicvf, ver 1.0
Sep 13 00:00:07.337452 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 13 00:00:07.337540 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-13T00:00:06 UTC (1757721606)
Sep 13 00:00:07.337552 kernel: efifb: probing for efifb
Sep 13 00:00:07.337560 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Sep 13 00:00:07.337567 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Sep 13 00:00:07.337575 kernel: efifb: scrolling: redraw
Sep 13 00:00:07.337582 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Sep 13 00:00:07.337590 kernel: Console: switching to colour frame buffer device 128x48
Sep 13 00:00:07.337599 kernel: fb0: EFI VGA frame buffer device
Sep 13 00:00:07.337606 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Sep 13 00:00:07.337614 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 13 00:00:07.337621 kernel: No ACPI PMU IRQ for CPU0
Sep 13 00:00:07.337628 kernel: No ACPI PMU IRQ for CPU1
Sep 13 00:00:07.337636 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Sep 13 00:00:07.337643 kernel: watchdog: Delayed init of the lockup detector failed: -19
Sep 13 00:00:07.337650 kernel: watchdog: Hard watchdog permanently disabled
Sep 13 00:00:07.337658 kernel: NET: Registered PF_INET6 protocol family
Sep 13 00:00:07.337667 kernel: Segment Routing with IPv6
Sep 13 00:00:07.337674 kernel: In-situ OAM (IOAM) with IPv6
Sep 13 00:00:07.337681 kernel: NET: Registered PF_PACKET protocol family
Sep 13 00:00:07.337689 kernel: Key type dns_resolver registered
Sep 13 00:00:07.337696 kernel: registered taskstats version 1
Sep 13 00:00:07.337703 kernel: Loading compiled-in X.509 certificates
Sep 13 00:00:07.337711 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 036ad4721a31543be5c000f2896b40d1e5515c6e'
Sep 13 00:00:07.337718 kernel: Key type .fscrypt registered
Sep 13 00:00:07.337725 kernel: Key type fscrypt-provisioning registered
Sep 13 00:00:07.337734 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 13 00:00:07.337741 kernel: ima: Allocated hash algorithm: sha1
Sep 13 00:00:07.337749 kernel: ima: No architecture policies found
Sep 13 00:00:07.337756 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 13 00:00:07.337763 kernel: clk: Disabling unused clocks
Sep 13 00:00:07.337771 kernel: Freeing unused kernel memory: 39488K
Sep 13 00:00:07.337778 kernel: Run /init as init process
Sep 13 00:00:07.337785 kernel: with arguments:
Sep 13 00:00:07.337792 kernel: /init
Sep 13 00:00:07.337801 kernel: with environment:
Sep 13 00:00:07.337808 kernel: HOME=/
Sep 13 00:00:07.337815 kernel: TERM=linux
Sep 13 00:00:07.337822 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 13 00:00:07.337831 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 13 00:00:07.337841 systemd[1]: Detected virtualization microsoft.
Sep 13 00:00:07.337849 systemd[1]: Detected architecture arm64.
Sep 13 00:00:07.337857 systemd[1]: Running in initrd.
Sep 13 00:00:07.337866 systemd[1]: No hostname configured, using default hostname.
Sep 13 00:00:07.337874 systemd[1]: Hostname set to .
Sep 13 00:00:07.337882 systemd[1]: Initializing machine ID from random generator.
Sep 13 00:00:07.337890 systemd[1]: Queued start job for default target initrd.target.
Sep 13 00:00:07.337898 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 00:00:07.337906 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 00:00:07.337914 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 13 00:00:07.337923 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 13 00:00:07.337932 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 13 00:00:07.337940 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 13 00:00:07.337949 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 13 00:00:07.337958 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 13 00:00:07.337966 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 00:00:07.337974 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 13 00:00:07.337983 systemd[1]: Reached target paths.target - Path Units.
Sep 13 00:00:07.337991 systemd[1]: Reached target slices.target - Slice Units.
Sep 13 00:00:07.337999 systemd[1]: Reached target swap.target - Swaps.
Sep 13 00:00:07.338007 systemd[1]: Reached target timers.target - Timer Units.
Sep 13 00:00:07.338015 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 13 00:00:07.338023 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 13 00:00:07.338031 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 13 00:00:07.338039 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 13 00:00:07.338047 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 13 00:00:07.338056 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 13 00:00:07.338064 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 13 00:00:07.338072 systemd[1]: Reached target sockets.target - Socket Units.
Sep 13 00:00:07.338080 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 13 00:00:07.338088 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 13 00:00:07.338096 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 13 00:00:07.338104 systemd[1]: Starting systemd-fsck-usr.service...
Sep 13 00:00:07.338112 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 13 00:00:07.338120 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 13 00:00:07.338145 systemd-journald[217]: Collecting audit messages is disabled.
Sep 13 00:00:07.338165 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:00:07.338174 systemd-journald[217]: Journal started
Sep 13 00:00:07.338194 systemd-journald[217]: Runtime Journal (/run/log/journal/8a2a9b3b469e4fafba83a653fedf76c7) is 8.0M, max 78.5M, 70.5M free.
Sep 13 00:00:07.353729 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 13 00:00:07.347990 systemd-modules-load[218]: Inserted module 'overlay'
Sep 13 00:00:07.361828 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 13 00:00:07.397596 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 13 00:00:07.391232 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 13 00:00:07.414890 kernel: Bridge firewalling registered
Sep 13 00:00:07.408083 systemd[1]: Finished systemd-fsck-usr.service.
Sep 13 00:00:07.414055 systemd-modules-load[218]: Inserted module 'br_netfilter'
Sep 13 00:00:07.420196 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 13 00:00:07.431424 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:00:07.456701 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 13 00:00:07.465666 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 13 00:00:07.491603 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 13 00:00:07.502652 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 13 00:00:07.523207 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:00:07.530880 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 13 00:00:07.537669 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 13 00:00:07.554602 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 13 00:00:07.584749 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 13 00:00:07.592672 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 13 00:00:07.615871 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 13 00:00:07.634258 dracut-cmdline[251]: dracut-dracut-053
Sep 13 00:00:07.638583 systemd-resolved[253]: Positive Trust Anchors:
Sep 13 00:00:07.655461 dracut-cmdline[251]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=e1b46f3c9e154636c32f6cde6e746a00a6b37ca7432cb4e16d172c05f584a8c9
Sep 13 00:00:07.638594 systemd-resolved[253]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 13 00:00:07.638626 systemd-resolved[253]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 13 00:00:07.642190 systemd-resolved[253]: Defaulting to hostname 'linux'.
Sep 13 00:00:07.643810 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 13 00:00:07.651735 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 13 00:00:07.663118 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 13 00:00:07.798510 kernel: SCSI subsystem initialized
Sep 13 00:00:07.805503 kernel: Loading iSCSI transport class v2.0-870.
Sep 13 00:00:07.815503 kernel: iscsi: registered transport (tcp)
Sep 13 00:00:07.834699 kernel: iscsi: registered transport (qla4xxx)
Sep 13 00:00:07.834758 kernel: QLogic iSCSI HBA Driver
Sep 13 00:00:07.867644 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 13 00:00:07.881966 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 13 00:00:07.913010 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 13 00:00:07.913038 kernel: device-mapper: uevent: version 1.0.3
Sep 13 00:00:07.919419 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 13 00:00:07.969523 kernel: raid6: neonx8 gen() 15742 MB/s
Sep 13 00:00:07.989505 kernel: raid6: neonx4 gen() 15653 MB/s
Sep 13 00:00:08.009498 kernel: raid6: neonx2 gen() 13237 MB/s
Sep 13 00:00:08.030498 kernel: raid6: neonx1 gen() 10456 MB/s
Sep 13 00:00:08.050495 kernel: raid6: int64x8 gen() 6960 MB/s
Sep 13 00:00:08.070495 kernel: raid6: int64x4 gen() 7359 MB/s
Sep 13 00:00:08.091496 kernel: raid6: int64x2 gen() 6133 MB/s
Sep 13 00:00:08.114921 kernel: raid6: int64x1 gen() 5062 MB/s
Sep 13 00:00:08.114941 kernel: raid6: using algorithm neonx8 gen() 15742 MB/s
Sep 13 00:00:08.139141 kernel: raid6: .... xor() 11897 MB/s, rmw enabled
Sep 13 00:00:08.139178 kernel: raid6: using neon recovery algorithm
Sep 13 00:00:08.151248 kernel: xor: measuring software checksum speed
Sep 13 00:00:08.151280 kernel: 8regs : 19759 MB/sec
Sep 13 00:00:08.158368 kernel: 32regs : 18692 MB/sec
Sep 13 00:00:08.158379 kernel: arm64_neon : 27025 MB/sec
Sep 13 00:00:08.162730 kernel: xor: using function: arm64_neon (27025 MB/sec)
Sep 13 00:00:08.213621 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 13 00:00:08.222690 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 13 00:00:08.239621 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 13 00:00:08.262439 systemd-udevd[438]: Using default interface naming scheme 'v255'.
Sep 13 00:00:08.268985 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 13 00:00:08.289608 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 13 00:00:08.320680 dracut-pre-trigger[452]: rd.md=0: removing MD RAID activation
Sep 13 00:00:08.346969 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 13 00:00:08.361766 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 13 00:00:08.400359 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 13 00:00:08.418638 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 13 00:00:08.442154 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 13 00:00:08.460041 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 13 00:00:08.475561 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 13 00:00:08.490251 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 13 00:00:08.511521 kernel: hv_vmbus: Vmbus version:5.3
Sep 13 00:00:08.512613 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 13 00:00:08.530072 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 13 00:00:08.570826 kernel: hv_vmbus: registering driver hv_netvsc
Sep 13 00:00:08.570851 kernel: hv_vmbus: registering driver hid_hyperv
Sep 13 00:00:08.570861 kernel: pps_core: LinuxPPS API ver. 1 registered
Sep 13 00:00:08.570871 kernel: hv_vmbus: registering driver hyperv_keyboard
Sep 13 00:00:08.570880 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Sep 13 00:00:08.570889 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Sep 13 00:00:08.530216 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:00:08.617700 kernel: hv_vmbus: registering driver hv_storvsc
Sep 13 00:00:08.617722 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Sep 13 00:00:08.617861 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Sep 13 00:00:08.617872 kernel: scsi host0: storvsc_host_t
Sep 13 00:00:08.617895 kernel: scsi host1: storvsc_host_t
Sep 13 00:00:08.601805 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 13 00:00:08.643346 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Sep 13 00:00:08.607875 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 00:00:08.660517 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Sep 13 00:00:08.608157 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:00:08.638237 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:00:08.674776 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:00:08.703302 kernel: hv_netvsc 000d3ac3-530c-000d-3ac3-530c000d3ac3 eth0: VF slot 1 added
Sep 13 00:00:08.703453 kernel: PTP clock support registered
Sep 13 00:00:08.682318 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 13 00:00:08.728803 kernel: hv_utils: Registering HyperV Utility Driver
Sep 13 00:00:08.728827 kernel: hv_vmbus: registering driver hv_utils
Sep 13 00:00:08.726620 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 00:00:08.751648 kernel: hv_utils: Heartbeat IC version 3.0
Sep 13 00:00:08.751671 kernel: hv_utils: Shutdown IC version 3.2
Sep 13 00:00:08.726722 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:00:08.525531 kernel: hv_utils: TimeSync IC version 4.0
Sep 13 00:00:08.539038 systemd-journald[217]: Time jumped backwards, rotating.
Sep 13 00:00:08.525267 systemd-resolved[253]: Clock change detected. Flushing caches.
Sep 13 00:00:08.556188 kernel: hv_vmbus: registering driver hv_pci
Sep 13 00:00:08.556208 kernel: hv_pci 3ec20755-cebe-4243-ac69-80ce2bff8f28: PCI VMBus probing: Using version 0x10004
Sep 13 00:00:08.525535 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:00:08.570821 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Sep 13 00:00:08.570975 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 13 00:00:08.583686 kernel: hv_pci 3ec20755-cebe-4243-ac69-80ce2bff8f28: PCI host bridge to bus cebe:00
Sep 13 00:00:08.583862 kernel: pci_bus cebe:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Sep 13 00:00:08.584000 kernel: pci_bus cebe:00: No busn resource found for root bus, will use [bus 00-ff]
Sep 13 00:00:08.596122 kernel: pci cebe:00:02.0: [15b3:1018] type 00 class 0x020000
Sep 13 00:00:08.600069 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:00:08.631714 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Sep 13 00:00:08.631886 kernel: pci cebe:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Sep 13 00:00:08.631908 kernel: pci cebe:00:02.0: enabling Extended Tags
Sep 13 00:00:08.631921 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Sep 13 00:00:08.636726 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Sep 13 00:00:08.636983 kernel: pci cebe:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at cebe:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Sep 13 00:00:08.651503 kernel: sd 0:0:0:0: [sda] Write Protect is off
Sep 13 00:00:08.651674 kernel: pci_bus cebe:00: busn_res: [bus 00-ff] end is updated to 00
Sep 13 00:00:08.656851 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Sep 13 00:00:08.656982 kernel: pci cebe:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Sep 13 00:00:08.656421 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 13 00:00:08.687332 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Sep 13 00:00:08.693012 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:00:08.718429 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 13 00:00:08.718451 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Sep 13 00:00:08.756767 kernel: mlx5_core cebe:00:02.0: enabling device (0000 -> 0002)
Sep 13 00:00:08.763035 kernel: mlx5_core cebe:00:02.0: firmware version: 16.30.1284
Sep 13 00:00:08.961783 kernel: hv_netvsc 000d3ac3-530c-000d-3ac3-530c000d3ac3 eth0: VF registering: eth1
Sep 13 00:00:08.961980 kernel: mlx5_core cebe:00:02.0 eth1: joined to eth0
Sep 13 00:00:08.969081 kernel: mlx5_core cebe:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Sep 13 00:00:08.979040 kernel: mlx5_core cebe:00:02.0 enP52926s1: renamed from eth1
Sep 13 00:00:09.463137 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Sep 13 00:00:09.489218 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (484)
Sep 13 00:00:09.489268 kernel: BTRFS: device fsid 29bc4da8-c689-46a2-a16a-b7bbc722db77 devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (488)
Sep 13 00:00:09.513242 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Sep 13 00:00:09.527114 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Sep 13 00:00:09.543242 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Sep 13 00:00:09.551105 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Sep 13 00:00:09.583217 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 13 00:00:09.611040 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 13 00:00:09.620043 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 13 00:00:10.636047 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 13 00:00:10.636632 disk-uuid[607]: The operation has completed successfully.
Sep 13 00:00:10.702552 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 13 00:00:10.702641 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 13 00:00:10.735149 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 13 00:00:10.751525 sh[720]: Success
Sep 13 00:00:10.784052 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Sep 13 00:00:11.233581 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 13 00:00:11.256163 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 13 00:00:11.266358 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 13 00:00:11.306903 kernel: BTRFS info (device dm-0): first mount of filesystem 29bc4da8-c689-46a2-a16a-b7bbc722db77
Sep 13 00:00:11.306955 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 13 00:00:11.314257 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 13 00:00:11.319310 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 13 00:00:11.323909 kernel: BTRFS info (device dm-0): using free space tree
Sep 13 00:00:11.844706 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 13 00:00:11.850144 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 13 00:00:11.870269 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 13 00:00:11.881832 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 13 00:00:11.916733 kernel: BTRFS info (device sda6): first mount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368
Sep 13 00:00:11.916776 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Sep 13 00:00:11.921996 kernel: BTRFS info (device sda6): using free space tree
Sep 13 00:00:11.983509 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 13 00:00:12.005962 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 13 00:00:12.019262 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 13 00:00:12.028374 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 13 00:00:12.042602 kernel: BTRFS info (device sda6): last unmount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368
Sep 13 00:00:12.040651 systemd-networkd[898]: lo: Link UP
Sep 13 00:00:12.040654 systemd-networkd[898]: lo: Gained carrier
Sep 13 00:00:12.042435 systemd-networkd[898]: Enumeration completed
Sep 13 00:00:12.044454 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 13 00:00:12.046903 systemd-networkd[898]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:00:12.046907 systemd-networkd[898]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 13 00:00:12.057863 systemd[1]: Reached target network.target - Network.
Sep 13 00:00:12.069044 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 13 00:00:12.101290 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 13 00:00:12.165810 kernel: mlx5_core cebe:00:02.0 enP52926s1: Link up
Sep 13 00:00:12.166058 kernel: buffer_size[0]=0 is not enough for lossless buffer
Sep 13 00:00:12.207105 kernel: hv_netvsc 000d3ac3-530c-000d-3ac3-530c000d3ac3 eth0: Data path switched to VF: enP52926s1
Sep 13 00:00:12.207320 systemd-networkd[898]: enP52926s1: Link UP
Sep 13 00:00:12.207408 systemd-networkd[898]: eth0: Link UP
Sep 13 00:00:12.207502 systemd-networkd[898]: eth0: Gained carrier
Sep 13 00:00:12.207510 systemd-networkd[898]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:00:12.216207 systemd-networkd[898]: enP52926s1: Gained carrier
Sep 13 00:00:12.240071 systemd-networkd[898]: eth0: DHCPv4 address 10.200.20.14/24, gateway 10.200.20.1 acquired from 168.63.129.16
Sep 13 00:00:13.493699 ignition[909]: Ignition 2.19.0
Sep 13 00:00:13.493711 ignition[909]: Stage: fetch-offline
Sep 13 00:00:13.497548 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 13 00:00:13.493746 ignition[909]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:00:13.493754 ignition[909]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 13 00:00:13.493844 ignition[909]: parsed url from cmdline: ""
Sep 13 00:00:13.493847 ignition[909]: no config URL provided
Sep 13 00:00:13.493851 ignition[909]: reading system config file "/usr/lib/ignition/user.ign"
Sep 13 00:00:13.527213 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 13 00:00:13.493858 ignition[909]: no config at "/usr/lib/ignition/user.ign"
Sep 13 00:00:13.493862 ignition[909]: failed to fetch config: resource requires networking
Sep 13 00:00:13.494047 ignition[909]: Ignition finished successfully
Sep 13 00:00:13.547358 ignition[917]: Ignition 2.19.0
Sep 13 00:00:13.547364 ignition[917]: Stage: fetch
Sep 13 00:00:13.547564 ignition[917]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:00:13.547577 ignition[917]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 13 00:00:13.547685 ignition[917]: parsed url from cmdline: ""
Sep 13 00:00:13.547689 ignition[917]: no config URL provided
Sep 13 00:00:13.547693 ignition[917]: reading system config file "/usr/lib/ignition/user.ign"
Sep 13 00:00:13.547700 ignition[917]: no config at "/usr/lib/ignition/user.ign"
Sep 13 00:00:13.547743 ignition[917]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Sep 13 00:00:13.670309 ignition[917]: GET result: OK
Sep 13 00:00:13.670398 ignition[917]: config has been read from IMDS userdata
Sep 13 00:00:13.670475 ignition[917]: parsing config with SHA512: 160f4f48874a72e1f5043fef45757d83e06dabb109442c460edd3fef21fab6d8c60f0bfca2f321b76570719a21032a2b9bb6d7ac1b3c515e9c3cc211bef57bf4
Sep 13 00:00:13.674094 unknown[917]: fetched base config from "system"
Sep 13 00:00:13.674477 ignition[917]: fetch: fetch complete
Sep 13 00:00:13.674101 unknown[917]: fetched base config from "system"
Sep 13 00:00:13.674482 ignition[917]: fetch: fetch passed
Sep 13 00:00:13.674106 unknown[917]: fetched user config from "azure"
Sep 13 00:00:13.674521 ignition[917]: Ignition finished successfully
Sep 13 00:00:13.680600 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 13 00:00:13.697253 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 13 00:00:13.718760 ignition[923]: Ignition 2.19.0
Sep 13 00:00:13.727387 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 13 00:00:13.718767 ignition[923]: Stage: kargs
Sep 13 00:00:13.719056 ignition[923]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:00:13.719066 ignition[923]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 13 00:00:13.751180 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 13 00:00:13.720661 ignition[923]: kargs: kargs passed
Sep 13 00:00:13.766718 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 13 00:00:13.720724 ignition[923]: Ignition finished successfully
Sep 13 00:00:13.773512 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 13 00:00:13.763882 ignition[929]: Ignition 2.19.0
Sep 13 00:00:13.786762 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 13 00:00:13.763889 ignition[929]: Stage: disks
Sep 13 00:00:13.798393 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 13 00:00:13.764107 ignition[929]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:00:13.810881 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 13 00:00:13.764116 ignition[929]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 13 00:00:13.815393 systemd-networkd[898]: eth0: Gained IPv6LL
Sep 13 00:00:13.765580 ignition[929]: disks: disks passed
Sep 13 00:00:13.829318 systemd[1]: Reached target basic.target - Basic System.
Sep 13 00:00:13.765633 ignition[929]: Ignition finished successfully
Sep 13 00:00:13.855271 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 13 00:00:13.963791 systemd-fsck[937]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Sep 13 00:00:13.977896 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 13 00:00:14.000197 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 13 00:00:14.063050 kernel: EXT4-fs (sda9): mounted filesystem d35fd879-6758-447b-9fdd-bb21dd7c5b2b r/w with ordered data mode. Quota mode: none.
Sep 13 00:00:14.063282 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 13 00:00:14.068656 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 13 00:00:14.121103 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 13 00:00:14.149614 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (948)
Sep 13 00:00:14.162894 kernel: BTRFS info (device sda6): first mount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368
Sep 13 00:00:14.162953 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Sep 13 00:00:14.167008 kernel: BTRFS info (device sda6): using free space tree
Sep 13 00:00:14.171135 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 13 00:00:14.182104 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 13 00:00:14.186807 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Sep 13 00:00:14.201160 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 13 00:00:14.201192 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 13 00:00:14.209292 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 13 00:00:14.224963 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 13 00:00:14.253379 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 13 00:00:14.863405 coreos-metadata[965]: Sep 13 00:00:14.863 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Sep 13 00:00:14.871441 coreos-metadata[965]: Sep 13 00:00:14.871 INFO Fetch successful
Sep 13 00:00:14.871441 coreos-metadata[965]: Sep 13 00:00:14.871 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Sep 13 00:00:14.887819 coreos-metadata[965]: Sep 13 00:00:14.887 INFO Fetch successful
Sep 13 00:00:14.893807 coreos-metadata[965]: Sep 13 00:00:14.887 INFO wrote hostname ci-4081.3.5-n-a13ccab244 to /sysroot/etc/hostname
Sep 13 00:00:14.899816 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 13 00:00:15.223640 initrd-setup-root[977]: cut: /sysroot/etc/passwd: No such file or directory
Sep 13 00:00:15.283416 initrd-setup-root[984]: cut: /sysroot/etc/group: No such file or directory
Sep 13 00:00:15.308522 initrd-setup-root[991]: cut: /sysroot/etc/shadow: No such file or directory
Sep 13 00:00:15.317698 initrd-setup-root[998]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 13 00:00:16.572924 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 13 00:00:16.589200 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 13 00:00:16.596472 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 13 00:00:16.622726 kernel: BTRFS info (device sda6): last unmount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368
Sep 13 00:00:16.622826 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 13 00:00:16.647484 ignition[1066]: INFO : Ignition 2.19.0
Sep 13 00:00:16.654247 ignition[1066]: INFO : Stage: mount
Sep 13 00:00:16.654247 ignition[1066]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 00:00:16.654247 ignition[1066]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 13 00:00:16.654247 ignition[1066]: INFO : mount: mount passed
Sep 13 00:00:16.654247 ignition[1066]: INFO : Ignition finished successfully
Sep 13 00:00:16.658350 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 13 00:00:16.681233 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 13 00:00:16.694512 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 13 00:00:16.721298 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 13 00:00:16.748042 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1077)
Sep 13 00:00:16.762654 kernel: BTRFS info (device sda6): first mount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368
Sep 13 00:00:16.762693 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Sep 13 00:00:16.766721 kernel: BTRFS info (device sda6): using free space tree
Sep 13 00:00:16.775050 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 13 00:00:16.776193 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 13 00:00:16.805147 ignition[1094]: INFO : Ignition 2.19.0
Sep 13 00:00:16.805147 ignition[1094]: INFO : Stage: files
Sep 13 00:00:16.813563 ignition[1094]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 00:00:16.813563 ignition[1094]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 13 00:00:16.813563 ignition[1094]: DEBUG : files: compiled without relabeling support, skipping
Sep 13 00:00:16.866348 ignition[1094]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 13 00:00:16.866348 ignition[1094]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 13 00:00:16.953693 ignition[1094]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 13 00:00:16.961464 ignition[1094]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 13 00:00:16.961464 ignition[1094]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 13 00:00:16.954106 unknown[1094]: wrote ssh authorized keys file for user: core
Sep 13 00:00:17.011444 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 13 00:00:17.022634 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Sep 13 00:00:17.158952 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 13 00:00:17.620088 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 13 00:00:17.620088 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 13 00:00:17.640169 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 13 00:00:17.640169 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 13 00:00:17.640169 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 13 00:00:17.640169 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 00:00:17.640169 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 00:00:17.640169 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 00:00:17.640169 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 00:00:17.640169 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 00:00:17.640169 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 00:00:17.640169 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 13 00:00:17.640169 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 13 00:00:17.640169 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 13 00:00:17.640169 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
Sep 13 00:00:18.138514 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 13 00:00:18.390346 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 13 00:00:18.390346 ignition[1094]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 13 00:00:18.451509 ignition[1094]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 13 00:00:18.463000 ignition[1094]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 13 00:00:18.463000 ignition[1094]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 13 00:00:18.463000 ignition[1094]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Sep 13 00:00:18.463000 ignition[1094]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Sep 13 00:00:18.463000 ignition[1094]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 13 00:00:18.463000 ignition[1094]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 13 00:00:18.463000 ignition[1094]: INFO : files: files passed
Sep 13 00:00:18.463000 ignition[1094]: INFO : Ignition finished successfully
Sep 13 00:00:18.463586 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 13 00:00:18.504321 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 13 00:00:18.522197 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 13 00:00:18.545163 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 13 00:00:18.545254 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 13 00:00:18.587185 initrd-setup-root-after-ignition[1123]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 00:00:18.587185 initrd-setup-root-after-ignition[1123]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 00:00:18.605402 initrd-setup-root-after-ignition[1127]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 00:00:18.598059 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 13 00:00:18.612765 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 13 00:00:18.642338 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 13 00:00:18.671270 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 13 00:00:18.671453 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 13 00:00:18.683768 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 13 00:00:18.695839 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 13 00:00:18.706716 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 13 00:00:18.725269 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 13 00:00:18.747461 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 13 00:00:18.767402 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 13 00:00:18.783949 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 13 00:00:18.790771 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 13 00:00:18.804217 systemd[1]: Stopped target timers.target - Timer Units.
Sep 13 00:00:18.816221 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 13 00:00:18.816343 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 13 00:00:18.834526 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 13 00:00:18.840525 systemd[1]: Stopped target basic.target - Basic System.
Sep 13 00:00:18.852169 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 13 00:00:18.863948 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 13 00:00:18.874758 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 13 00:00:18.886263 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 13 00:00:18.897921 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 13 00:00:18.910896 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 13 00:00:18.922110 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 13 00:00:18.934159 systemd[1]: Stopped target swap.target - Swaps.
Sep 13 00:00:18.943784 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 13 00:00:18.943904 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 13 00:00:18.958985 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 13 00:00:18.965231 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 00:00:18.977197 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 13 00:00:18.977264 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 00:00:18.989657 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 13 00:00:18.989779 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 13 00:00:19.008158 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 13 00:00:19.008276 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 13 00:00:19.015232 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 13 00:00:19.015326 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 13 00:00:19.026012 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Sep 13 00:00:19.026117 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 13 00:00:19.058329 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 13 00:00:19.119193 ignition[1147]: INFO : Ignition 2.19.0
Sep 13 00:00:19.119193 ignition[1147]: INFO : Stage: umount
Sep 13 00:00:19.119193 ignition[1147]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 00:00:19.119193 ignition[1147]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 13 00:00:19.119193 ignition[1147]: INFO : umount: umount passed
Sep 13 00:00:19.119193 ignition[1147]: INFO : Ignition finished successfully
Sep 13 00:00:19.095230 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 13 00:00:19.108405 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 13 00:00:19.108566 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 13 00:00:19.127819 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 13 00:00:19.127927 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 13 00:00:19.144288 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 13 00:00:19.144375 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 13 00:00:19.155613 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 13 00:00:19.155831 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 13 00:00:19.167578 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 13 00:00:19.167626 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 13 00:00:19.177105 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 13 00:00:19.177144 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 13 00:00:19.186975 systemd[1]: Stopped target network.target - Network.
Sep 13 00:00:19.198755 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 13 00:00:19.198813 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 13 00:00:19.210335 systemd[1]: Stopped target paths.target - Path Units.
Sep 13 00:00:19.221495 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 13 00:00:19.225043 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 00:00:19.234644 systemd[1]: Stopped target slices.target - Slice Units.
Sep 13 00:00:19.246647 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 13 00:00:19.257449 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 13 00:00:19.257507 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 13 00:00:19.268248 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 13 00:00:19.268303 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 13 00:00:19.279941 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 13 00:00:19.279993 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 13 00:00:19.291955 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 13 00:00:19.291999 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 13 00:00:19.303652 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 13 00:00:19.315413 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 13 00:00:19.327243 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 13 00:00:19.327335 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 13 00:00:19.342506 systemd-networkd[898]: eth0: DHCPv6 lease lost Sep 13 00:00:19.343836 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 13 00:00:19.582247 kernel: hv_netvsc 000d3ac3-530c-000d-3ac3-530c000d3ac3 eth0: Data path switched from VF: enP52926s1 Sep 13 00:00:19.343958 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 13 00:00:19.357805 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 13 00:00:19.357966 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 13 00:00:19.371810 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 13 00:00:19.371879 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 13 00:00:19.407236 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 13 00:00:19.417518 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 13 00:00:19.417601 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 13 00:00:19.429969 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 13 00:00:19.430040 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 13 00:00:19.441219 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 13 00:00:19.441273 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 13 00:00:19.453317 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 13 00:00:19.453373 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 13 00:00:19.464971 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 13 00:00:19.480763 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 13 00:00:19.481315 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 13 00:00:19.481403 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 13 00:00:19.504426 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 13 00:00:19.504546 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 13 00:00:19.518061 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 13 00:00:19.518141 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 13 00:00:19.529231 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 13 00:00:19.529269 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 13 00:00:19.541542 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 13 00:00:19.541596 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 13 00:00:19.559667 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 13 00:00:19.559721 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 13 00:00:19.581980 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 13 00:00:19.582073 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 13 00:00:19.595973 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 13 00:00:19.596053 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 13 00:00:19.614265 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 13 00:00:19.629683 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. 
Sep 13 00:00:19.629758 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 13 00:00:19.644515 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 13 00:00:19.644583 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 13 00:00:19.888046 systemd-journald[217]: Received SIGTERM from PID 1 (systemd). Sep 13 00:00:19.657089 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 13 00:00:19.657143 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 13 00:00:19.669605 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 13 00:00:19.669654 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:00:19.684602 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 13 00:00:19.684705 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 13 00:00:19.697390 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 13 00:00:19.697494 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 13 00:00:19.708585 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 13 00:00:19.736252 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 13 00:00:19.752063 systemd[1]: Switching root. Sep 13 00:00:19.952204 systemd-journald[217]: Journal stopped
Sep 13 00:00:07.334284 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156 Sep 13 00:00:07.334291 kernel: Policy zone: Normal Sep 13 00:00:07.334298 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 13 00:00:07.334304 kernel: software IO TLB: area num 2. Sep 13 00:00:07.334313 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB) Sep 13 00:00:07.334320 kernel: Memory: 3982564K/4194160K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39488K init, 897K bss, 211596K reserved, 0K cma-reserved) Sep 13 00:00:07.334327 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Sep 13 00:00:07.334333 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 13 00:00:07.334341 kernel: rcu: RCU event tracing is enabled. Sep 13 00:00:07.334348 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Sep 13 00:00:07.334355 kernel: Trampoline variant of Tasks RCU enabled. Sep 13 00:00:07.334362 kernel: Tracing variant of Tasks RCU enabled. Sep 13 00:00:07.334368 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 13 00:00:07.334375 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Sep 13 00:00:07.334382 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Sep 13 00:00:07.334390 kernel: GICv3: 960 SPIs implemented Sep 13 00:00:07.334397 kernel: GICv3: 0 Extended SPIs implemented Sep 13 00:00:07.334403 kernel: Root IRQ handler: gic_handle_irq Sep 13 00:00:07.334410 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Sep 13 00:00:07.334417 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Sep 13 00:00:07.334423 kernel: ITS: No ITS available, not enabling LPIs Sep 13 00:00:07.334430 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 13 00:00:07.334437 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 13 00:00:07.334444 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Sep 13 00:00:07.334451 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Sep 13 00:00:07.334458 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Sep 13 00:00:07.334466 kernel: Console: colour dummy device 80x25 Sep 13 00:00:07.334473 kernel: printk: console [tty1] enabled Sep 13 00:00:07.334481 kernel: ACPI: Core revision 20230628 Sep 13 00:00:07.336522 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Sep 13 00:00:07.336537 kernel: pid_max: default: 32768 minimum: 301 Sep 13 00:00:07.336545 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Sep 13 00:00:07.336552 kernel: landlock: Up and running. Sep 13 00:00:07.336559 kernel: SELinux: Initializing. Sep 13 00:00:07.336567 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 13 00:00:07.336574 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 13 00:00:07.336585 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 13 00:00:07.336593 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 13 00:00:07.336600 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1 Sep 13 00:00:07.336607 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0 Sep 13 00:00:07.336614 kernel: Hyper-V: enabling crash_kexec_post_notifiers Sep 13 00:00:07.336621 kernel: rcu: Hierarchical SRCU implementation. Sep 13 00:00:07.336629 kernel: rcu: Max phase no-delay instances is 400. Sep 13 00:00:07.336643 kernel: Remapping and enabling EFI services. Sep 13 00:00:07.336650 kernel: smp: Bringing up secondary CPUs ... Sep 13 00:00:07.336657 kernel: Detected PIPT I-cache on CPU1 Sep 13 00:00:07.336665 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Sep 13 00:00:07.336674 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 13 00:00:07.336681 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Sep 13 00:00:07.336688 kernel: smp: Brought up 1 node, 2 CPUs Sep 13 00:00:07.336696 kernel: SMP: Total of 2 processors activated. 
Sep 13 00:00:07.336703 kernel: CPU features: detected: 32-bit EL0 Support Sep 13 00:00:07.336712 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Sep 13 00:00:07.336719 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Sep 13 00:00:07.336727 kernel: CPU features: detected: CRC32 instructions Sep 13 00:00:07.336734 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Sep 13 00:00:07.336741 kernel: CPU features: detected: LSE atomic instructions Sep 13 00:00:07.336749 kernel: CPU features: detected: Privileged Access Never Sep 13 00:00:07.336756 kernel: CPU: All CPU(s) started at EL1 Sep 13 00:00:07.336763 kernel: alternatives: applying system-wide alternatives Sep 13 00:00:07.336771 kernel: devtmpfs: initialized Sep 13 00:00:07.336780 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 13 00:00:07.336787 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Sep 13 00:00:07.336795 kernel: pinctrl core: initialized pinctrl subsystem Sep 13 00:00:07.336803 kernel: SMBIOS 3.1.0 present. Sep 13 00:00:07.336810 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Sep 13 00:00:07.336818 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 13 00:00:07.336826 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Sep 13 00:00:07.336833 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Sep 13 00:00:07.336841 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Sep 13 00:00:07.336850 kernel: audit: initializing netlink subsys (disabled) Sep 13 00:00:07.336857 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1 Sep 13 00:00:07.336865 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 13 00:00:07.336872 kernel: cpuidle: using governor menu Sep 13 00:00:07.336879 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Sep 13 00:00:07.336887 kernel: ASID allocator initialised with 32768 entries Sep 13 00:00:07.336894 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 13 00:00:07.336901 kernel: Serial: AMBA PL011 UART driver Sep 13 00:00:07.336909 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Sep 13 00:00:07.336918 kernel: Modules: 0 pages in range for non-PLT usage Sep 13 00:00:07.336925 kernel: Modules: 508992 pages in range for PLT usage Sep 13 00:00:07.336932 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 13 00:00:07.336940 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Sep 13 00:00:07.336947 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Sep 13 00:00:07.336955 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Sep 13 00:00:07.336962 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 13 00:00:07.336970 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Sep 13 00:00:07.336977 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Sep 13 00:00:07.336986 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Sep 13 00:00:07.336994 kernel: ACPI: Added _OSI(Module Device) Sep 13 00:00:07.337001 kernel: ACPI: Added _OSI(Processor Device) Sep 13 00:00:07.337008 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 13 00:00:07.337016 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 13 00:00:07.337023 kernel: ACPI: Interpreter enabled Sep 13 00:00:07.337030 kernel: ACPI: Using GIC for interrupt routing Sep 13 00:00:07.337038 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Sep 13 00:00:07.337045 kernel: printk: console [ttyAMA0] enabled Sep 13 00:00:07.337054 kernel: printk: bootconsole [pl11] disabled Sep 13 00:00:07.337061 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Sep 13 00:00:07.337069 kernel: iommu: Default domain type: Translated Sep 13 00:00:07.337076 kernel: iommu: DMA domain TLB invalidation policy: strict mode Sep 13 00:00:07.337084 kernel: efivars: Registered efivars operations Sep 13 00:00:07.337091 kernel: vgaarb: loaded Sep 13 00:00:07.337098 kernel: clocksource: Switched to clocksource arch_sys_counter Sep 13 00:00:07.337106 kernel: VFS: Disk quotas dquot_6.6.0 Sep 13 00:00:07.337113 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 13 00:00:07.337122 kernel: pnp: PnP ACPI init Sep 13 00:00:07.337130 kernel: pnp: PnP ACPI: found 0 devices Sep 13 00:00:07.337137 kernel: NET: Registered PF_INET protocol family Sep 13 00:00:07.337145 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 13 00:00:07.337152 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 13 00:00:07.337160 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 13 00:00:07.337167 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 13 00:00:07.337175 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 13 00:00:07.337182 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 13 00:00:07.337191 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 13 00:00:07.337198 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 13 00:00:07.337206 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 13 
00:00:07.337213 kernel: PCI: CLS 0 bytes, default 64 Sep 13 00:00:07.337220 kernel: kvm [1]: HYP mode not available Sep 13 00:00:07.337228 kernel: Initialise system trusted keyrings Sep 13 00:00:07.337235 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 13 00:00:07.337242 kernel: Key type asymmetric registered Sep 13 00:00:07.337250 kernel: Asymmetric key parser 'x509' registered Sep 13 00:00:07.337258 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Sep 13 00:00:07.337266 kernel: io scheduler mq-deadline registered Sep 13 00:00:07.337273 kernel: io scheduler kyber registered Sep 13 00:00:07.337281 kernel: io scheduler bfq registered Sep 13 00:00:07.337288 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 13 00:00:07.337295 kernel: thunder_xcv, ver 1.0 Sep 13 00:00:07.337303 kernel: thunder_bgx, ver 1.0 Sep 13 00:00:07.337310 kernel: nicpf, ver 1.0 Sep 13 00:00:07.337317 kernel: nicvf, ver 1.0 Sep 13 00:00:07.337452 kernel: rtc-efi rtc-efi.0: registered as rtc0 Sep 13 00:00:07.337540 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-13T00:00:06 UTC (1757721606) Sep 13 00:00:07.337552 kernel: efifb: probing for efifb Sep 13 00:00:07.337560 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Sep 13 00:00:07.337567 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Sep 13 00:00:07.337575 kernel: efifb: scrolling: redraw Sep 13 00:00:07.337582 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Sep 13 00:00:07.337590 kernel: Console: switching to colour frame buffer device 128x48 Sep 13 00:00:07.337599 kernel: fb0: EFI VGA frame buffer device Sep 13 00:00:07.337606 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Sep 13 00:00:07.337614 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 13 00:00:07.337621 kernel: No ACPI PMU IRQ for CPU0 Sep 13 00:00:07.337628 kernel: No ACPI PMU IRQ for CPU1 Sep 13 00:00:07.337636 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available Sep 13 00:00:07.337643 kernel: watchdog: Delayed init of the lockup detector failed: -19 Sep 13 00:00:07.337650 kernel: watchdog: Hard watchdog permanently disabled Sep 13 00:00:07.337658 kernel: NET: Registered PF_INET6 protocol family Sep 13 00:00:07.337667 kernel: Segment Routing with IPv6 Sep 13 00:00:07.337674 kernel: In-situ OAM (IOAM) with IPv6 Sep 13 00:00:07.337681 kernel: NET: Registered PF_PACKET protocol family Sep 13 00:00:07.337689 kernel: Key type dns_resolver registered Sep 13 00:00:07.337696 kernel: registered taskstats version 1 Sep 13 00:00:07.337703 kernel: Loading compiled-in X.509 certificates Sep 13 00:00:07.337711 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 036ad4721a31543be5c000f2896b40d1e5515c6e' Sep 13 00:00:07.337718 kernel: Key type .fscrypt registered Sep 13 00:00:07.337725 kernel: Key type fscrypt-provisioning registered Sep 13 00:00:07.337734 kernel: ima: No TPM chip found, activating TPM-bypass! 
Sep 13 00:00:07.337741 kernel: ima: Allocated hash algorithm: sha1 Sep 13 00:00:07.337749 kernel: ima: No architecture policies found Sep 13 00:00:07.337756 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Sep 13 00:00:07.337763 kernel: clk: Disabling unused clocks Sep 13 00:00:07.337771 kernel: Freeing unused kernel memory: 39488K Sep 13 00:00:07.337778 kernel: Run /init as init process Sep 13 00:00:07.337785 kernel: with arguments: Sep 13 00:00:07.337792 kernel: /init Sep 13 00:00:07.337801 kernel: with environment: Sep 13 00:00:07.337808 kernel: HOME=/ Sep 13 00:00:07.337815 kernel: TERM=linux Sep 13 00:00:07.337822 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 13 00:00:07.337831 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 13 00:00:07.337841 systemd[1]: Detected virtualization microsoft. Sep 13 00:00:07.337849 systemd[1]: Detected architecture arm64. Sep 13 00:00:07.337857 systemd[1]: Running in initrd. Sep 13 00:00:07.337866 systemd[1]: No hostname configured, using default hostname. Sep 13 00:00:07.337874 systemd[1]: Hostname set to . Sep 13 00:00:07.337882 systemd[1]: Initializing machine ID from random generator. Sep 13 00:00:07.337890 systemd[1]: Queued start job for default target initrd.target. Sep 13 00:00:07.337898 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 13 00:00:07.337906 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 13 00:00:07.337914 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 13 00:00:07.337923 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 13 00:00:07.337932 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 13 00:00:07.337940 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 13 00:00:07.337949 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 13 00:00:07.337958 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 13 00:00:07.337966 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 13 00:00:07.337974 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 13 00:00:07.337983 systemd[1]: Reached target paths.target - Path Units. Sep 13 00:00:07.337991 systemd[1]: Reached target slices.target - Slice Units. Sep 13 00:00:07.337999 systemd[1]: Reached target swap.target - Swaps. Sep 13 00:00:07.338007 systemd[1]: Reached target timers.target - Timer Units. Sep 13 00:00:07.338015 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 13 00:00:07.338023 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 13 00:00:07.338031 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 13 00:00:07.338039 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Sep 13 00:00:07.338047 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 13 00:00:07.338056 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 13 00:00:07.338064 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 13 00:00:07.338072 systemd[1]: Reached target sockets.target - Socket Units. Sep 13 00:00:07.338080 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 13 00:00:07.338088 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 13 00:00:07.338096 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 13 00:00:07.338104 systemd[1]: Starting systemd-fsck-usr.service... Sep 13 00:00:07.338112 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 13 00:00:07.338120 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 13 00:00:07.338145 systemd-journald[217]: Collecting audit messages is disabled. Sep 13 00:00:07.338165 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 00:00:07.338174 systemd-journald[217]: Journal started Sep 13 00:00:07.338194 systemd-journald[217]: Runtime Journal (/run/log/journal/8a2a9b3b469e4fafba83a653fedf76c7) is 8.0M, max 78.5M, 70.5M free. Sep 13 00:00:07.353729 systemd[1]: Started systemd-journald.service - Journal Service. Sep 13 00:00:07.347990 systemd-modules-load[218]: Inserted module 'overlay' Sep 13 00:00:07.361828 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 13 00:00:07.397596 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 13 00:00:07.391232 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 13 00:00:07.414890 kernel: Bridge firewalling registered Sep 13 00:00:07.408083 systemd[1]: Finished systemd-fsck-usr.service. Sep 13 00:00:07.414055 systemd-modules-load[218]: Inserted module 'br_netfilter' Sep 13 00:00:07.420196 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 13 00:00:07.431424 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:00:07.456701 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 13 00:00:07.465666 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 13 00:00:07.491603 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 13 00:00:07.502652 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 13 00:00:07.523207 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 13 00:00:07.530880 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 13 00:00:07.537669 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 13 00:00:07.554602 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 13 00:00:07.584749 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 13 00:00:07.592672 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 13 00:00:07.615871 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Sep 13 00:00:07.634258 dracut-cmdline[251]: dracut-dracut-053 Sep 13 00:00:07.638583 systemd-resolved[253]: Positive Trust Anchors: Sep 13 00:00:07.655461 dracut-cmdline[251]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=e1b46f3c9e154636c32f6cde6e746a00a6b37ca7432cb4e16d172c05f584a8c9 Sep 13 00:00:07.638594 systemd-resolved[253]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 13 00:00:07.638626 systemd-resolved[253]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 13 00:00:07.642190 systemd-resolved[253]: Defaulting to hostname 'linux'. Sep 13 00:00:07.643810 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 13 00:00:07.651735 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 13 00:00:07.663118 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 13 00:00:07.798510 kernel: SCSI subsystem initialized Sep 13 00:00:07.805503 kernel: Loading iSCSI transport class v2.0-870. Sep 13 00:00:07.815503 kernel: iscsi: registered transport (tcp) Sep 13 00:00:07.834699 kernel: iscsi: registered transport (qla4xxx) Sep 13 00:00:07.834758 kernel: QLogic iSCSI HBA Driver Sep 13 00:00:07.867644 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 13 00:00:07.881966 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 13 00:00:07.913010 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 13 00:00:07.913038 kernel: device-mapper: uevent: version 1.0.3 Sep 13 00:00:07.919419 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 13 00:00:07.969523 kernel: raid6: neonx8 gen() 15742 MB/s Sep 13 00:00:07.989505 kernel: raid6: neonx4 gen() 15653 MB/s Sep 13 00:00:08.009498 kernel: raid6: neonx2 gen() 13237 MB/s Sep 13 00:00:08.030498 kernel: raid6: neonx1 gen() 10456 MB/s Sep 13 00:00:08.050495 kernel: raid6: int64x8 gen() 6960 MB/s Sep 13 00:00:08.070495 kernel: raid6: int64x4 gen() 7359 MB/s Sep 13 00:00:08.091496 kernel: raid6: int64x2 gen() 6133 MB/s Sep 13 00:00:08.114921 kernel: raid6: int64x1 gen() 5062 MB/s Sep 13 00:00:08.114941 kernel: raid6: using algorithm neonx8 gen() 15742 MB/s Sep 13 00:00:08.139141 kernel: raid6: .... 
xor() 11897 MB/s, rmw enabled Sep 13 00:00:08.139178 kernel: raid6: using neon recovery algorithm Sep 13 00:00:08.151248 kernel: xor: measuring software checksum speed Sep 13 00:00:08.151280 kernel: 8regs : 19759 MB/sec Sep 13 00:00:08.158368 kernel: 32regs : 18692 MB/sec Sep 13 00:00:08.158379 kernel: arm64_neon : 27025 MB/sec Sep 13 00:00:08.162730 kernel: xor: using function: arm64_neon (27025 MB/sec) Sep 13 00:00:08.213621 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 13 00:00:08.222690 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 13 00:00:08.239621 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 13 00:00:08.262439 systemd-udevd[438]: Using default interface naming scheme 'v255'. Sep 13 00:00:08.268985 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 13 00:00:08.289608 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 13 00:00:08.320680 dracut-pre-trigger[452]: rd.md=0: removing MD RAID activation Sep 13 00:00:08.346969 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 13 00:00:08.361766 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 13 00:00:08.400359 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 13 00:00:08.418638 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 13 00:00:08.442154 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 13 00:00:08.460041 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 13 00:00:08.475561 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 13 00:00:08.490251 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 13 00:00:08.511521 kernel: hv_vmbus: Vmbus version:5.3 Sep 13 00:00:08.512613 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 13 00:00:08.530072 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 13 00:00:08.570826 kernel: hv_vmbus: registering driver hv_netvsc Sep 13 00:00:08.570851 kernel: hv_vmbus: registering driver hid_hyperv Sep 13 00:00:08.570861 kernel: pps_core: LinuxPPS API ver. 1 registered Sep 13 00:00:08.570871 kernel: hv_vmbus: registering driver hyperv_keyboard Sep 13 00:00:08.570880 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Sep 13 00:00:08.570889 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Sep 13 00:00:08.530216 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 13 00:00:08.617700 kernel: hv_vmbus: registering driver hv_storvsc Sep 13 00:00:08.617722 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Sep 13 00:00:08.617861 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Sep 13 00:00:08.617872 kernel: scsi host0: storvsc_host_t Sep 13 00:00:08.617895 kernel: scsi host1: storvsc_host_t Sep 13 00:00:08.601805 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Sep 13 00:00:08.643346 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Sep 13 00:00:08.607875 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 13 00:00:08.660517 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Sep 13 00:00:08.608157 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:00:08.638237 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 00:00:08.674776 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 00:00:08.703302 kernel: hv_netvsc 000d3ac3-530c-000d-3ac3-530c000d3ac3 eth0: VF slot 1 added Sep 13 00:00:08.703453 kernel: PTP clock support registered Sep 13 00:00:08.682318 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 13 00:00:08.728803 kernel: hv_utils: Registering HyperV Utility Driver Sep 13 00:00:08.728827 kernel: hv_vmbus: registering driver hv_utils Sep 13 00:00:08.726620 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 13 00:00:08.751648 kernel: hv_utils: Heartbeat IC version 3.0 Sep 13 00:00:08.751671 kernel: hv_utils: Shutdown IC version 3.2 Sep 13 00:00:08.726722 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:00:08.525531 kernel: hv_utils: TimeSync IC version 4.0 Sep 13 00:00:08.539038 systemd-journald[217]: Time jumped backwards, rotating. Sep 13 00:00:08.525267 systemd-resolved[253]: Clock change detected. Flushing caches. Sep 13 00:00:08.556188 kernel: hv_vmbus: registering driver hv_pci Sep 13 00:00:08.556208 kernel: hv_pci 3ec20755-cebe-4243-ac69-80ce2bff8f28: PCI VMBus probing: Using version 0x10004 Sep 13 00:00:08.525535 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 00:00:08.570821 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Sep 13 00:00:08.570975 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 13 00:00:08.583686 kernel: hv_pci 3ec20755-cebe-4243-ac69-80ce2bff8f28: PCI host bridge to bus cebe:00 Sep 13 00:00:08.583862 kernel: pci_bus cebe:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Sep 13 00:00:08.584000 kernel: pci_bus cebe:00: No busn resource found for root bus, will use [bus 00-ff] Sep 13 00:00:08.596122 kernel: pci cebe:00:02.0: [15b3:1018] type 00 class 0x020000 Sep 13 00:00:08.600069 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:00:08.631714 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Sep 13 00:00:08.631886 kernel: pci cebe:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Sep 13 00:00:08.631908 kernel: pci cebe:00:02.0: enabling Extended Tags Sep 13 00:00:08.631921 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Sep 13 00:00:08.636726 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Sep 13 00:00:08.636983 kernel: pci cebe:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at cebe:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Sep 13 00:00:08.651503 kernel: sd 0:0:0:0: [sda] Write Protect is off Sep 13 00:00:08.651674 kernel: pci_bus cebe:00: busn_res: [bus 00-ff] end is updated to 00 Sep 13 00:00:08.656851 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Sep 13 00:00:08.656982 kernel: pci cebe:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Sep 13 00:00:08.656421 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Sep 13 00:00:08.687332 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Sep 13 00:00:08.693012 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 13 00:00:08.718429 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 13 00:00:08.718451 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Sep 13 00:00:08.756767 kernel: mlx5_core cebe:00:02.0: enabling device (0000 -> 0002) Sep 13 00:00:08.763035 kernel: mlx5_core cebe:00:02.0: firmware version: 16.30.1284 Sep 13 00:00:08.961783 kernel: hv_netvsc 000d3ac3-530c-000d-3ac3-530c000d3ac3 eth0: VF registering: eth1 Sep 13 00:00:08.961980 kernel: mlx5_core cebe:00:02.0 eth1: joined to eth0 Sep 13 00:00:08.969081 kernel: mlx5_core cebe:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Sep 13 00:00:08.979040 kernel: mlx5_core cebe:00:02.0 enP52926s1: renamed from eth1 Sep 13 00:00:09.463137 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Sep 13 00:00:09.489218 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (484) Sep 13 00:00:09.489268 kernel: BTRFS: device fsid 29bc4da8-c689-46a2-a16a-b7bbc722db77 devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (488) Sep 13 00:00:09.513242 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Sep 13 00:00:09.527114 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Sep 13 00:00:09.543242 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Sep 13 00:00:09.551105 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Sep 13 00:00:09.583217 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 13 00:00:09.611040 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 13 00:00:09.620043 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 13 00:00:10.636047 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 13 00:00:10.636632 disk-uuid[607]: The operation has completed successfully. Sep 13 00:00:10.702552 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 13 00:00:10.702641 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 13 00:00:10.735149 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 13 00:00:10.751525 sh[720]: Success Sep 13 00:00:10.784052 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Sep 13 00:00:11.233581 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 13 00:00:11.256163 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 13 00:00:11.266358 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 13 00:00:11.306903 kernel: BTRFS info (device dm-0): first mount of filesystem 29bc4da8-c689-46a2-a16a-b7bbc722db77 Sep 13 00:00:11.306955 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Sep 13 00:00:11.314257 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 13 00:00:11.319310 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 13 00:00:11.323909 kernel: BTRFS info (device dm-0): using free space tree Sep 13 00:00:11.844706 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. 
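verity-setup.service above creates the dm-verity mapping /dev/mapper/usr, keyed by the root hash the kernel command line carries as verity.usrhash= (visible in the dracut-cmdline line earlier in this log). A minimal illustrative sketch, not Flatcar's actual unit, that extracts that hash and asks device-mapper for the mapping's status; it assumes a booted system where the "usr" mapping already exists and root privileges:

```python
import re
import subprocess

# The trust anchor for /dev/mapper/usr is the root hash on the kernel
# command line; verity-setup keys the dm-verity mapping named "usr" with it.
cmdline = open("/proc/cmdline").read()
root_hash = re.search(r"verity\.usrhash=([0-9a-f]{64})", cmdline).group(1)
print("expected USR root hash:", root_hash)

# 'veritysetup status usr' reports the data/hash devices and the active root
# hash; a mismatch against the cmdline value would mean the wrong (or a
# tampered) USR partition was mapped.
subprocess.run(["veritysetup", "status", "usr"], check=True)
```

Once mapped, every read from /dev/mapper/usr is verified block-by-block against the sha256 Merkle tree rooted at that hash (the log above shows the sha256-ce implementation being selected).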
Sep 13 00:00:11.850144 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 13 00:00:11.870269 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 13 00:00:11.881832 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 13 00:00:11.916733 kernel: BTRFS info (device sda6): first mount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368 Sep 13 00:00:11.916776 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Sep 13 00:00:11.921996 kernel: BTRFS info (device sda6): using free space tree Sep 13 00:00:11.983509 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 13 00:00:12.005962 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 13 00:00:12.019262 kernel: BTRFS info (device sda6): auto enabling async discard Sep 13 00:00:12.028374 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 13 00:00:12.042602 kernel: BTRFS info (device sda6): last unmount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368 Sep 13 00:00:12.040651 systemd-networkd[898]: lo: Link UP Sep 13 00:00:12.040654 systemd-networkd[898]: lo: Gained carrier Sep 13 00:00:12.042435 systemd-networkd[898]: Enumeration completed Sep 13 00:00:12.044454 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 13 00:00:12.046903 systemd-networkd[898]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 00:00:12.046907 systemd-networkd[898]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 00:00:12.057863 systemd[1]: Reached target network.target - Network. Sep 13 00:00:12.069044 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 13 00:00:12.101290 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 13 00:00:12.165810 kernel: mlx5_core cebe:00:02.0 enP52926s1: Link up Sep 13 00:00:12.166058 kernel: buffer_size[0]=0 is not enough for lossless buffer Sep 13 00:00:12.207105 kernel: hv_netvsc 000d3ac3-530c-000d-3ac3-530c000d3ac3 eth0: Data path switched to VF: enP52926s1 Sep 13 00:00:12.207320 systemd-networkd[898]: enP52926s1: Link UP Sep 13 00:00:12.207408 systemd-networkd[898]: eth0: Link UP Sep 13 00:00:12.207502 systemd-networkd[898]: eth0: Gained carrier Sep 13 00:00:12.207510 systemd-networkd[898]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 00:00:12.216207 systemd-networkd[898]: enP52926s1: Gained carrier Sep 13 00:00:12.240071 systemd-networkd[898]: eth0: DHCPv4 address 10.200.20.14/24, gateway 10.200.20.1 acquired from 168.63.129.16 Sep 13 00:00:13.493699 ignition[909]: Ignition 2.19.0 Sep 13 00:00:13.493711 ignition[909]: Stage: fetch-offline Sep 13 00:00:13.497548 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
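The VF handover above is Azure accelerated networking: the synthetic hv_netvsc interface (eth0) remains the control path while the Mellanox VF (enP52926s1, mlx5_core) carries data, the pair sharing one MAC. A small illustrative helper, not part of the boot flow, to show each interface's bound driver on such a machine:

```python
import os

# List every network interface with its bound kernel driver and MAC.
# On an accelerated-networking VM the hv_netvsc device and the mlx5_core
# VF show up with the same MAC address, matching the pairing in the log.
for ifname in sorted(os.listdir("/sys/class/net")):
    driver_link = f"/sys/class/net/{ifname}/device/driver"
    driver = (os.path.basename(os.readlink(driver_link))
              if os.path.islink(driver_link) else "virtual")
    with open(f"/sys/class/net/{ifname}/address") as f:
        mac = f.read().strip()
    print(f"{ifname:16} driver={driver:12} mac={mac}")
```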
Sep 13 00:00:13.493746 ignition[909]: no configs at "/usr/lib/ignition/base.d" Sep 13 00:00:13.493754 ignition[909]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 13 00:00:13.493844 ignition[909]: parsed url from cmdline: "" Sep 13 00:00:13.493847 ignition[909]: no config URL provided Sep 13 00:00:13.493851 ignition[909]: reading system config file "/usr/lib/ignition/user.ign" Sep 13 00:00:13.527213 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Sep 13 00:00:13.493858 ignition[909]: no config at "/usr/lib/ignition/user.ign" Sep 13 00:00:13.493862 ignition[909]: failed to fetch config: resource requires networking Sep 13 00:00:13.494047 ignition[909]: Ignition finished successfully Sep 13 00:00:13.547358 ignition[917]: Ignition 2.19.0 Sep 13 00:00:13.547364 ignition[917]: Stage: fetch Sep 13 00:00:13.547564 ignition[917]: no configs at "/usr/lib/ignition/base.d" Sep 13 00:00:13.547577 ignition[917]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 13 00:00:13.547685 ignition[917]: parsed url from cmdline: "" Sep 13 00:00:13.547689 ignition[917]: no config URL provided Sep 13 00:00:13.547693 ignition[917]: reading system config file "/usr/lib/ignition/user.ign" Sep 13 00:00:13.547700 ignition[917]: no config at "/usr/lib/ignition/user.ign" Sep 13 00:00:13.547743 ignition[917]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Sep 13 00:00:13.670309 ignition[917]: GET result: OK Sep 13 00:00:13.670398 ignition[917]: config has been read from IMDS userdata Sep 13 00:00:13.670475 ignition[917]: parsing config with SHA512: 160f4f48874a72e1f5043fef45757d83e06dabb109442c460edd3fef21fab6d8c60f0bfca2f321b76570719a21032a2b9bb6d7ac1b3c515e9c3cc211bef57bf4 Sep 13 00:00:13.674094 unknown[917]: fetched base config from "system" Sep 13 00:00:13.674477 ignition[917]: fetch: fetch complete Sep 13 00:00:13.674101 unknown[917]: fetched base config from "system" Sep 13 00:00:13.674482 ignition[917]: fetch: fetch passed Sep 13 00:00:13.674106 unknown[917]: fetched user config from "azure" Sep 13 00:00:13.674521 ignition[917]: Ignition finished successfully Sep 13 00:00:13.680600 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Sep 13 00:00:13.697253 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 13 00:00:13.718760 ignition[923]: Ignition 2.19.0 Sep 13 00:00:13.727387 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 13 00:00:13.718767 ignition[923]: Stage: kargs Sep 13 00:00:13.719056 ignition[923]: no configs at "/usr/lib/ignition/base.d" Sep 13 00:00:13.719066 ignition[923]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 13 00:00:13.751180 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 13 00:00:13.720661 ignition[923]: kargs: kargs passed Sep 13 00:00:13.766718 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 13 00:00:13.720724 ignition[923]: Ignition finished successfully Sep 13 00:00:13.773512 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 13 00:00:13.763882 ignition[929]: Ignition 2.19.0 Sep 13 00:00:13.786762 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 13 00:00:13.763889 ignition[929]: Stage: disks Sep 13 00:00:13.798393 systemd[1]: Reached target local-fs.target - Local File Systems. 
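The fetch stage's config retrieval can be reproduced by hand. A minimal sketch of the IMDS request logged above, assuming the standard Azure behavior that the Metadata header must be set and that userData comes back base64-encoded:

```python
import base64
import hashlib
import urllib.request

# Same endpoint the fetch stage logs; IMDS rejects requests without the
# Metadata header, and the userData payload is base64-encoded text.
url = ("http://169.254.169.254/metadata/instance/compute/userData"
       "?api-version=2021-01-01&format=text")
req = urllib.request.Request(url, headers={"Metadata": "true"})
with urllib.request.urlopen(req, timeout=10) as resp:
    user_data = base64.b64decode(resp.read())

# Ignition logs a SHA512 of the config it parsed, as seen above.
print("config sha512:", hashlib.sha512(user_data).hexdigest())
```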
Sep 13 00:00:13.764107 ignition[929]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:00:13.810881 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 13 00:00:13.764116 ignition[929]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 13 00:00:13.815393 systemd-networkd[898]: eth0: Gained IPv6LL
Sep 13 00:00:13.765580 ignition[929]: disks: disks passed
Sep 13 00:00:13.829318 systemd[1]: Reached target basic.target - Basic System.
Sep 13 00:00:13.765633 ignition[929]: Ignition finished successfully
Sep 13 00:00:13.855271 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 13 00:00:13.963791 systemd-fsck[937]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Sep 13 00:00:13.977896 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 13 00:00:14.000197 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 13 00:00:14.063050 kernel: EXT4-fs (sda9): mounted filesystem d35fd879-6758-447b-9fdd-bb21dd7c5b2b r/w with ordered data mode. Quota mode: none.
Sep 13 00:00:14.063282 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 13 00:00:14.068656 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 13 00:00:14.121103 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 13 00:00:14.149614 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (948)
Sep 13 00:00:14.162894 kernel: BTRFS info (device sda6): first mount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368
Sep 13 00:00:14.162953 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Sep 13 00:00:14.167008 kernel: BTRFS info (device sda6): using free space tree
Sep 13 00:00:14.171135 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 13 00:00:14.182104 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 13 00:00:14.186807 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Sep 13 00:00:14.201160 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 13 00:00:14.201192 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 13 00:00:14.209292 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 13 00:00:14.224963 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 13 00:00:14.253379 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 13 00:00:14.863405 coreos-metadata[965]: Sep 13 00:00:14.863 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Sep 13 00:00:14.871441 coreos-metadata[965]: Sep 13 00:00:14.871 INFO Fetch successful
Sep 13 00:00:14.871441 coreos-metadata[965]: Sep 13 00:00:14.871 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Sep 13 00:00:14.887819 coreos-metadata[965]: Sep 13 00:00:14.887 INFO Fetch successful
Sep 13 00:00:14.893807 coreos-metadata[965]: Sep 13 00:00:14.887 INFO wrote hostname ci-4081.3.5-n-a13ccab244 to /sysroot/etc/hostname
Sep 13 00:00:14.899816 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
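The hostname agent's fetches above follow the same two-endpoint pattern: a reachability probe against the Azure wireserver (168.63.129.16), then the instance name from IMDS, written into the initrd's /sysroot/etc/hostname. A rough equivalent sketch, with the same hedges as before (the `Metadata: true` header is an assumption based on IMDS rules; URLs and the target path are copied from the log):

```python
# Sketch: probe the wireserver, fetch the instance name, write the hostname
# file the way flatcar-metadata-hostname.service logs it above.
import urllib.request

WIRESERVER = "http://168.63.129.16/?comp=versions"
NAME_URL = ("http://169.254.169.254/metadata/instance/compute/name"
            "?api-version=2017-08-01&format=text")

urllib.request.urlopen(WIRESERVER, timeout=10).read()  # reachability check

req = urllib.request.Request(NAME_URL, headers={"Metadata": "true"})
hostname = urllib.request.urlopen(req, timeout=10).read().decode().strip()

# /sysroot is where the initrd has the real root mounted at this point.
with open("/sysroot/etc/hostname", "w") as f:
    f.write(hostname + "\n")
```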
Sep 13 00:00:15.223640 initrd-setup-root[977]: cut: /sysroot/etc/passwd: No such file or directory
Sep 13 00:00:15.283416 initrd-setup-root[984]: cut: /sysroot/etc/group: No such file or directory
Sep 13 00:00:15.308522 initrd-setup-root[991]: cut: /sysroot/etc/shadow: No such file or directory
Sep 13 00:00:15.317698 initrd-setup-root[998]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 13 00:00:16.572924 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 13 00:00:16.589200 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 13 00:00:16.596472 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 13 00:00:16.622726 kernel: BTRFS info (device sda6): last unmount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368
Sep 13 00:00:16.622826 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 13 00:00:16.647484 ignition[1066]: INFO : Ignition 2.19.0
Sep 13 00:00:16.654247 ignition[1066]: INFO : Stage: mount
Sep 13 00:00:16.654247 ignition[1066]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 00:00:16.654247 ignition[1066]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 13 00:00:16.654247 ignition[1066]: INFO : mount: mount passed
Sep 13 00:00:16.654247 ignition[1066]: INFO : Ignition finished successfully
Sep 13 00:00:16.658350 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 13 00:00:16.681233 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 13 00:00:16.694512 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 13 00:00:16.721298 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 13 00:00:16.748042 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1077)
Sep 13 00:00:16.762654 kernel: BTRFS info (device sda6): first mount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368
Sep 13 00:00:16.762693 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Sep 13 00:00:16.766721 kernel: BTRFS info (device sda6): using free space tree
Sep 13 00:00:16.775050 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 13 00:00:16.776193 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 13 00:00:16.805147 ignition[1094]: INFO : Ignition 2.19.0
Sep 13 00:00:16.805147 ignition[1094]: INFO : Stage: files
Sep 13 00:00:16.813563 ignition[1094]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 00:00:16.813563 ignition[1094]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 13 00:00:16.813563 ignition[1094]: DEBUG : files: compiled without relabeling support, skipping
Sep 13 00:00:16.866348 ignition[1094]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 13 00:00:16.866348 ignition[1094]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 13 00:00:16.953693 ignition[1094]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 13 00:00:16.961464 ignition[1094]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 13 00:00:16.961464 ignition[1094]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 13 00:00:16.954106 unknown[1094]: wrote ssh authorized keys file for user: core
Sep 13 00:00:17.011444 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 13 00:00:17.022634 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Sep 13 00:00:17.158952 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 13 00:00:17.620088 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 13 00:00:17.620088 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 13 00:00:17.640169 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 13 00:00:17.640169 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 13 00:00:17.640169 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 13 00:00:17.640169 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 00:00:17.640169 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 00:00:17.640169 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 00:00:17.640169 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 00:00:17.640169 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 00:00:17.640169 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 00:00:17.640169 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 13 00:00:17.640169 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 13 00:00:17.640169 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 13 00:00:17.640169 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
Sep 13 00:00:18.138514 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 13 00:00:18.390346 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 13 00:00:18.390346 ignition[1094]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 13 00:00:18.451509 ignition[1094]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 13 00:00:18.463000 ignition[1094]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 13 00:00:18.463000 ignition[1094]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 13 00:00:18.463000 ignition[1094]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Sep 13 00:00:18.463000 ignition[1094]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Sep 13 00:00:18.463000 ignition[1094]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 13 00:00:18.463000 ignition[1094]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 13 00:00:18.463000 ignition[1094]: INFO : files: files passed
Sep 13 00:00:18.463000 ignition[1094]: INFO : Ignition finished successfully
Sep 13 00:00:18.463586 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 13 00:00:18.504321 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 13 00:00:18.522197 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 13 00:00:18.545163 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 13 00:00:18.545254 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 13 00:00:18.587185 initrd-setup-root-after-ignition[1123]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 00:00:18.587185 initrd-setup-root-after-ignition[1123]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 00:00:18.605402 initrd-setup-root-after-ignition[1127]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 00:00:18.598059 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 13 00:00:18.612765 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 13 00:00:18.642338 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 13 00:00:18.671270 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 13 00:00:18.671453 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 13 00:00:18.683768 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 13 00:00:18.695839 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 13 00:00:18.706716 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 13 00:00:18.725269 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 13 00:00:18.747461 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 13 00:00:18.767402 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 13 00:00:18.783949 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 13 00:00:18.790771 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 13 00:00:18.804217 systemd[1]: Stopped target timers.target - Timer Units.
Sep 13 00:00:18.816221 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 13 00:00:18.816343 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 13 00:00:18.834526 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 13 00:00:18.840525 systemd[1]: Stopped target basic.target - Basic System.
Sep 13 00:00:18.852169 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 13 00:00:18.863948 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 13 00:00:18.874758 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 13 00:00:18.886263 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 13 00:00:18.897921 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 13 00:00:18.910896 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 13 00:00:18.922110 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 13 00:00:18.934159 systemd[1]: Stopped target swap.target - Swaps.
Sep 13 00:00:18.943784 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 13 00:00:18.943904 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 13 00:00:18.958985 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 13 00:00:18.965231 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 00:00:18.977197 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 13 00:00:18.977264 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 00:00:18.989657 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 13 00:00:18.989779 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 13 00:00:19.008158 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 13 00:00:19.008276 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 13 00:00:19.015232 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 13 00:00:19.015326 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 13 00:00:19.026012 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Sep 13 00:00:19.026117 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 13 00:00:19.058329 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 13 00:00:19.119193 ignition[1147]: INFO : Ignition 2.19.0
Sep 13 00:00:19.119193 ignition[1147]: INFO : Stage: umount
Sep 13 00:00:19.119193 ignition[1147]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 00:00:19.119193 ignition[1147]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 13 00:00:19.119193 ignition[1147]: INFO : umount: umount passed
Sep 13 00:00:19.119193 ignition[1147]: INFO : Ignition finished successfully
Sep 13 00:00:19.095230 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 13 00:00:19.108405 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 13 00:00:19.108566 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 13 00:00:19.127819 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 13 00:00:19.127927 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 13 00:00:19.144288 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 13 00:00:19.144375 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 13 00:00:19.155613 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 13 00:00:19.155831 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 13 00:00:19.167578 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 13 00:00:19.167626 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 13 00:00:19.177105 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 13 00:00:19.177144 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 13 00:00:19.186975 systemd[1]: Stopped target network.target - Network.
Sep 13 00:00:19.198755 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 13 00:00:19.198813 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 13 00:00:19.210335 systemd[1]: Stopped target paths.target - Path Units.
Sep 13 00:00:19.221495 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 13 00:00:19.225043 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 00:00:19.234644 systemd[1]: Stopped target slices.target - Slice Units.
Sep 13 00:00:19.246647 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 13 00:00:19.257449 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 13 00:00:19.257507 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 13 00:00:19.268248 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 13 00:00:19.268303 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 13 00:00:19.279941 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 13 00:00:19.279993 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 13 00:00:19.291955 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 13 00:00:19.291999 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 13 00:00:19.303652 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 13 00:00:19.315413 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 13 00:00:19.327243 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 13 00:00:19.327335 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 13 00:00:19.342506 systemd-networkd[898]: eth0: DHCPv6 lease lost
Sep 13 00:00:19.343836 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 13 00:00:19.582247 kernel: hv_netvsc 000d3ac3-530c-000d-3ac3-530c000d3ac3 eth0: Data path switched from VF: enP52926s1
Sep 13 00:00:19.343958 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 13 00:00:19.357805 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 13 00:00:19.357966 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 13 00:00:19.371810 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 13 00:00:19.371879 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 13 00:00:19.407236 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 13 00:00:19.417518 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 13 00:00:19.417601 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 13 00:00:19.429969 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 13 00:00:19.430040 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 13 00:00:19.441219 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 13 00:00:19.441273 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 13 00:00:19.453317 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 13 00:00:19.453373 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 13 00:00:19.464971 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 13 00:00:19.480763 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 13 00:00:19.481315 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 13 00:00:19.481403 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 13 00:00:19.504426 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 13 00:00:19.504546 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 13 00:00:19.518061 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 13 00:00:19.518141 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 13 00:00:19.529231 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 13 00:00:19.529269 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 13 00:00:19.541542 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 13 00:00:19.541596 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 13 00:00:19.559667 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 13 00:00:19.559721 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 13 00:00:19.581980 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 13 00:00:19.582073 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:00:19.595973 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 13 00:00:19.596053 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 13 00:00:19.614265 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 13 00:00:19.629683 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 13 00:00:19.629758 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 13 00:00:19.644515 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 13 00:00:19.644583 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 13 00:00:19.888046 systemd-journald[217]: Received SIGTERM from PID 1 (systemd).
Sep 13 00:00:19.657089 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 13 00:00:19.657143 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 13 00:00:19.669605 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 00:00:19.669654 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:00:19.684602 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 13 00:00:19.684705 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 13 00:00:19.697390 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 13 00:00:19.697494 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 13 00:00:19.708585 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 13 00:00:19.736252 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 13 00:00:19.752063 systemd[1]: Switching root.
Sep 13 00:00:19.952204 systemd-journald[217]: Journal stopped
Sep 13 00:00:29.100345 kernel: SELinux: policy capability network_peer_controls=1
Sep 13 00:00:29.100367 kernel: SELinux: policy capability open_perms=1
Sep 13 00:00:29.100377 kernel: SELinux: policy capability extended_socket_class=1
Sep 13 00:00:29.100385 kernel: SELinux: policy capability always_check_network=0
Sep 13 00:00:29.100394 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 13 00:00:29.100402 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 13 00:00:29.100411 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 13 00:00:29.100419 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 13 00:00:29.100426 kernel: audit: type=1403 audit(1757721621.916:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 13 00:00:29.100436 systemd[1]: Successfully loaded SELinux policy in 275.665ms.
Sep 13 00:00:29.100447 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.339ms.
Sep 13 00:00:29.100457 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 13 00:00:29.100466 systemd[1]: Detected virtualization microsoft.
Sep 13 00:00:29.100477 systemd[1]: Detected architecture arm64.
Sep 13 00:00:29.100486 systemd[1]: Detected first boot.
Sep 13 00:00:29.100497 systemd[1]: Hostname set to <ci-4081.3.5-n-a13ccab244>.
Sep 13 00:00:29.100506 systemd[1]: Initializing machine ID from random generator.
Sep 13 00:00:29.100515 zram_generator::config[1188]: No configuration found.
Sep 13 00:00:29.100525 systemd[1]: Populated /etc with preset unit settings.
Sep 13 00:00:29.100534 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 13 00:00:29.100543 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 13 00:00:29.100552 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 13 00:00:29.100563 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 13 00:00:29.100572 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 13 00:00:29.100581 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 13 00:00:29.100595 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 13 00:00:29.100605 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 13 00:00:29.100614 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 13 00:00:29.100623 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 13 00:00:29.100634 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 13 00:00:29.100643 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 00:00:29.100652 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 00:00:29.100662 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 13 00:00:29.100671 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 13 00:00:29.100681 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 13 00:00:29.100690 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 13 00:00:29.100699 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Sep 13 00:00:29.100710 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 00:00:29.100720 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 13 00:00:29.100729 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 13 00:00:29.100740 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 13 00:00:29.100750 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 13 00:00:29.100759 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 13 00:00:29.100769 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 13 00:00:29.100778 systemd[1]: Reached target slices.target - Slice Units.
Sep 13 00:00:29.100789 systemd[1]: Reached target swap.target - Swaps.
Sep 13 00:00:29.100798 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 13 00:00:29.100807 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 13 00:00:29.100817 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 13 00:00:29.100826 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 13 00:00:29.100836 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 13 00:00:29.100847 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 13 00:00:29.100857 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 13 00:00:29.100866 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 13 00:00:29.100875 systemd[1]: Mounting media.mount - External Media Directory...
Sep 13 00:00:29.100885 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 13 00:00:29.100895 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 13 00:00:29.100905 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 13 00:00:29.100916 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 13 00:00:29.100926 systemd[1]: Reached target machines.target - Containers.
Sep 13 00:00:29.100935 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 13 00:00:29.100945 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 13 00:00:29.100955 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 13 00:00:29.100964 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 13 00:00:29.100974 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 13 00:00:29.100983 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 13 00:00:29.100994 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 13 00:00:29.101004 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 13 00:00:29.101013 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 13 00:00:29.101030 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 13 00:00:29.101039 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 13 00:00:29.101049 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 13 00:00:29.101058 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 13 00:00:29.101067 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 13 00:00:29.101078 kernel: fuse: init (API version 7.39)
Sep 13 00:00:29.101087 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 13 00:00:29.101097 kernel: loop: module loaded
Sep 13 00:00:29.101105 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 13 00:00:29.101116 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 13 00:00:29.101125 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 13 00:00:29.101135 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 13 00:00:29.101144 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 13 00:00:29.101153 systemd[1]: Stopped verity-setup.service.
Sep 13 00:00:29.101164 kernel: ACPI: bus type drm_connector registered
Sep 13 00:00:29.101173 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 13 00:00:29.101197 systemd-journald[1274]: Collecting audit messages is disabled.
Sep 13 00:00:29.101217 systemd-journald[1274]: Journal started
Sep 13 00:00:29.101238 systemd-journald[1274]: Runtime Journal (/run/log/journal/d9238019442a4c3b97cd7eaab1d3d384) is 8.0M, max 78.5M, 70.5M free.
Sep 13 00:00:27.867511 systemd[1]: Queued start job for default target multi-user.target.
Sep 13 00:00:28.076329 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Sep 13 00:00:28.076667 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 13 00:00:28.077001 systemd[1]: systemd-journald.service: Consumed 3.284s CPU time.
Sep 13 00:00:29.117100 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 13 00:00:29.117886 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 13 00:00:29.124393 systemd[1]: Mounted media.mount - External Media Directory.
Sep 13 00:00:29.130156 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 13 00:00:29.137748 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 13 00:00:29.144559 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 13 00:00:29.151236 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 13 00:00:29.159563 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 13 00:00:29.167287 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 13 00:00:29.167418 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 13 00:00:29.174506 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:00:29.174623 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 13 00:00:29.181141 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 13 00:00:29.181281 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 13 00:00:29.188757 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:00:29.188888 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 13 00:00:29.197558 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 13 00:00:29.197694 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 13 00:00:29.204480 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:00:29.204614 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 13 00:00:29.210976 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 13 00:00:29.218560 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 13 00:00:29.226315 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 13 00:00:29.233737 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 13 00:00:29.252086 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 13 00:00:29.262131 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 13 00:00:29.272151 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 13 00:00:29.278845 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 13 00:00:29.278885 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 13 00:00:29.287392 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Sep 13 00:00:29.299162 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 13 00:00:29.308209 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 13 00:00:29.316812 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 00:00:29.360213 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 13 00:00:29.367317 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 13 00:00:29.373744 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 00:00:29.375575 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 13 00:00:29.382127 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 13 00:00:29.383152 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 13 00:00:29.390693 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 13 00:00:29.404250 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 13 00:00:29.414183 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Sep 13 00:00:29.427763 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 13 00:00:29.435853 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 13 00:00:29.444526 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 13 00:00:29.457119 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 13 00:00:29.466519 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 13 00:00:29.477427 systemd-journald[1274]: Time spent on flushing to /var/log/journal/d9238019442a4c3b97cd7eaab1d3d384 is 44.846ms for 903 entries.
Sep 13 00:00:29.477427 systemd-journald[1274]: System Journal (/var/log/journal/d9238019442a4c3b97cd7eaab1d3d384) is 11.8M, max 2.6G, 2.6G free.
Sep 13 00:00:29.593418 systemd-journald[1274]: Received client request to flush runtime journal.
Sep 13 00:00:29.593477 kernel: loop0: detected capacity change from 0 to 31320
Sep 13 00:00:29.593523 systemd-journald[1274]: /var/log/journal/d9238019442a4c3b97cd7eaab1d3d384/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating.
Sep 13 00:00:29.593549 systemd-journald[1274]: Rotating system journal.
Sep 13 00:00:29.491394 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Sep 13 00:00:29.498840 udevadm[1325]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Sep 13 00:00:29.560174 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 13 00:00:29.595180 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 13 00:00:29.609950 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 13 00:00:29.611422 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Sep 13 00:00:29.673561 systemd-tmpfiles[1324]: ACLs are not supported, ignoring.
Sep 13 00:00:29.673576 systemd-tmpfiles[1324]: ACLs are not supported, ignoring.
Sep 13 00:00:29.678366 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 13 00:00:29.690313 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 13 00:00:30.103044 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 13 00:00:30.285306 kernel: loop1: detected capacity change from 0 to 114328
Sep 13 00:00:30.525083 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 13 00:00:30.536205 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 13 00:00:30.555730 systemd-tmpfiles[1345]: ACLs are not supported, ignoring.
Sep 13 00:00:30.555750 systemd-tmpfiles[1345]: ACLs are not supported, ignoring.
Sep 13 00:00:30.559194 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 13 00:00:30.933172 kernel: loop2: detected capacity change from 0 to 203944
Sep 13 00:00:30.982037 kernel: loop3: detected capacity change from 0 to 114432
Sep 13 00:00:31.607391 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 13 00:00:31.621192 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 13 00:00:31.640053 kernel: loop4: detected capacity change from 0 to 31320
Sep 13 00:00:31.641401 systemd-udevd[1352]: Using default interface naming scheme 'v255'.
Sep 13 00:00:31.654046 kernel: loop5: detected capacity change from 0 to 114328
Sep 13 00:00:31.668039 kernel: loop6: detected capacity change from 0 to 203944
Sep 13 00:00:31.689160 kernel: loop7: detected capacity change from 0 to 114432
Sep 13 00:00:31.705594 (sd-merge)[1353]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Sep 13 00:00:31.706014 (sd-merge)[1353]: Merged extensions into '/usr'.
Sep 13 00:00:31.709272 systemd[1]: Reloading requested from client PID 1322 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 13 00:00:31.709285 systemd[1]: Reloading...
Sep 13 00:00:31.773056 zram_generator::config[1379]: No configuration found.
Sep 13 00:00:31.903895 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:00:31.959713 systemd[1]: Reloading finished in 250 ms.
Sep 13 00:00:31.988919 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 13 00:00:32.011199 systemd[1]: Starting ensure-sysext.service...
Sep 13 00:00:32.016703 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 13 00:00:32.065550 systemd[1]: Reloading requested from client PID 1434 ('systemctl') (unit ensure-sysext.service)...
Sep 13 00:00:32.065569 systemd[1]: Reloading...
Sep 13 00:00:32.107488 systemd-tmpfiles[1435]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 13 00:00:32.107768 systemd-tmpfiles[1435]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 13 00:00:32.108444 systemd-tmpfiles[1435]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 13 00:00:32.108656 systemd-tmpfiles[1435]: ACLs are not supported, ignoring.
Sep 13 00:00:32.108701 systemd-tmpfiles[1435]: ACLs are not supported, ignoring.
Sep 13 00:00:32.138045 zram_generator::config[1459]: No configuration found.
Sep 13 00:00:32.180184 systemd-tmpfiles[1435]: Detected autofs mount point /boot during canonicalization of boot.
Sep 13 00:00:32.180195 systemd-tmpfiles[1435]: Skipping /boot
Sep 13 00:00:32.189154 systemd-tmpfiles[1435]: Detected autofs mount point /boot during canonicalization of boot.
Sep 13 00:00:32.189288 systemd-tmpfiles[1435]: Skipping /boot
Sep 13 00:00:32.250729 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:00:32.308418 systemd[1]: Reloading finished in 242 ms.
Sep 13 00:00:32.339708 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 13 00:00:32.355286 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 13 00:00:32.429292 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 13 00:00:32.438010 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 13 00:00:32.449291 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 13 00:00:32.465575 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 13 00:00:32.483100 systemd[1]: Expecting device dev-ptp_hyperv.device - /dev/ptp_hyperv...
Sep 13 00:00:32.489011 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 13 00:00:32.496473 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 13 00:00:32.504585 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 13 00:00:32.515131 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 13 00:00:32.526378 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 13 00:00:32.533084 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 00:00:32.533282 systemd[1]: Reached target time-set.target - System Time Set.
Sep 13 00:00:32.540129 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:00:32.540290 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 13 00:00:32.547651 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 13 00:00:32.547793 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 13 00:00:32.555641 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:00:32.555787 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 13 00:00:32.565469 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 13 00:00:32.577993 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:00:32.578486 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 13 00:00:32.593466 systemd[1]: Finished ensure-sysext.service.
Sep 13 00:00:32.612291 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 13 00:00:32.651431 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 13 00:00:32.660633 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 00:00:32.660717 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 13 00:00:32.663954 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 13 00:00:32.671698 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Sep 13 00:00:32.748355 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 13 00:00:32.769439 systemd[1]: Condition check resulted in dev-ptp_hyperv.device - /dev/ptp_hyperv being skipped.
Sep 13 00:00:32.816329 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:00:32.819037 kernel: mousedev: PS/2 mouse device common for all mice
Sep 13 00:00:32.839874 augenrules[1589]: No rules
Sep 13 00:00:32.843478 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 13 00:00:32.863646 kernel: hv_vmbus: registering driver hv_balloon
Sep 13 00:00:32.863740 kernel: hv_vmbus: registering driver hyperv_fb
Sep 13 00:00:32.863761 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Sep 13 00:00:32.867040 kernel: hv_balloon: Memory hot add disabled on ARM64
Sep 13 00:00:32.875212 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Sep 13 00:00:32.885372 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Sep 13 00:00:32.891312 kernel: Console: switching to colour dummy device 80x25
Sep 13 00:00:32.898634 kernel: Console: switching to colour frame buffer device 128x48
Sep 13 00:00:32.903452 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 13 00:00:32.931722 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 00:00:32.931974 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:00:32.942189 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:00:32.981104 systemd-resolved[1525]: Positive Trust Anchors:
Sep 13 00:00:32.981421 systemd-resolved[1525]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 13 00:00:32.981500 systemd-resolved[1525]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 13 00:00:33.007087 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1568)
Sep 13 00:00:33.051513 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Sep 13 00:00:33.060918 systemd-resolved[1525]: Using system hostname 'ci-4081.3.5-n-a13ccab244'.
Sep 13 00:00:33.068225 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 13 00:00:33.075765 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 13 00:00:33.082800 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 13 00:00:33.114667 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Sep 13 00:00:33.128191 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Sep 13 00:00:33.136701 systemd-networkd[1567]: lo: Link UP
Sep 13 00:00:33.136958 systemd-networkd[1567]: lo: Gained carrier
Sep 13 00:00:33.139144 systemd-networkd[1567]: Enumeration completed
Sep 13 00:00:33.139547 systemd-networkd[1567]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:00:33.139670 systemd-networkd[1567]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 13 00:00:33.140114 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 13 00:00:33.148275 systemd[1]: Reached target network.target - Network.
Sep 13 00:00:33.156287 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 13 00:00:33.176529 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 13 00:00:33.220048 kernel: mlx5_core cebe:00:02.0 enP52926s1: Link up
Sep 13 00:00:33.220376 kernel: buffer_size[0]=0 is not enough for lossless buffer
Sep 13 00:00:33.228514 lvm[1650]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 13 00:00:33.253523 kernel: hv_netvsc 000d3ac3-530c-000d-3ac3-530c000d3ac3 eth0: Data path switched to VF: enP52926s1
Sep 13 00:00:33.253911 systemd-networkd[1567]: enP52926s1: Link UP
Sep 13 00:00:33.254386 systemd-networkd[1567]: eth0: Link UP
Sep 13 00:00:33.254397 systemd-networkd[1567]: eth0: Gained carrier
Sep 13 00:00:33.254412 systemd-networkd[1567]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:00:33.259295 systemd-networkd[1567]: enP52926s1: Gained carrier
Sep 13 00:00:33.267073 systemd-networkd[1567]: eth0: DHCPv4 address 10.200.20.14/24, gateway 10.200.20.1 acquired from 168.63.129.16
Sep 13 00:00:33.274655 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Sep 13 00:00:33.282509 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 13 00:00:33.294167 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Sep 13 00:00:33.306131 lvm[1655]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 13 00:00:33.330106 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Sep 13 00:00:34.536161 systemd-networkd[1567]: eth0: Gained IPv6LL
Sep 13 00:00:34.538717 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 13 00:00:34.546835 systemd[1]: Reached target network-online.target - Network is Online.
Sep 13 00:00:34.644170 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:00:35.899857 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 13 00:00:35.907936 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 13 00:00:40.296052 ldconfig[1317]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 13 00:00:40.312127 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 13 00:00:40.325150 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 13 00:00:40.366925 systemd[1]: Finished systemd-update-done.service - Update is Completed.
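The DHCPv4 lease recorded above (10.200.20.14/24, gateway 10.200.20.1, acquired from 168.63.129.16) reflects an Azure convention: the wireserver address that hands out the lease is the same virtual endpoint the metadata fetches earlier in the log use. The addressing implied by that line can be checked with the standard-library ipaddress module; all values below are taken from the log.

```python
# Sketch: sanity-check the addressing facts from the DHCPv4 log line.
import ipaddress

iface = ipaddress.ip_interface("10.200.20.14/24")
print(iface.network)  # 10.200.20.0/24 -- the on-link subnet

# The gateway is inside the subnet, so it is reached directly.
print(ipaddress.ip_address("10.200.20.1") in iface.network)    # True

# The wireserver/DHCP source is not on-link; traffic to it goes via the gateway.
print(ipaddress.ip_address("168.63.129.16") in iface.network)  # False
```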
Sep 13 00:00:40.373514 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 13 00:00:40.379466 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 13 00:00:40.387376 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 13 00:00:40.396128 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 13 00:00:40.402091 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 13 00:00:40.409094 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 13 00:00:40.415964 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 13 00:00:40.415999 systemd[1]: Reached target paths.target - Path Units.
Sep 13 00:00:40.420978 systemd[1]: Reached target timers.target - Timer Units.
Sep 13 00:00:40.443890 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 13 00:00:40.451694 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 13 00:00:40.482827 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 13 00:00:40.490087 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 13 00:00:40.496380 systemd[1]: Reached target sockets.target - Socket Units.
Sep 13 00:00:40.501775 systemd[1]: Reached target basic.target - Basic System.
Sep 13 00:00:40.507092 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 13 00:00:40.507121 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 13 00:00:40.529121 systemd[1]: Starting chronyd.service - NTP client/server...
Sep 13 00:00:40.536163 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 13 00:00:40.553443 (chronyd)[1667]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Sep 13 00:00:40.554299 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Sep 13 00:00:40.563393 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 13 00:00:40.569837 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 13 00:00:40.576804 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 13 00:00:40.582667 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 13 00:00:40.582715 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy).
Sep 13 00:00:40.583814 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Sep 13 00:00:40.591702 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Sep 13 00:00:40.593273 KVP[1675]: KVP starting; pid is:1675
Sep 13 00:00:40.594533 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 00:00:40.602855 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 13 00:00:40.619399 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 13 00:00:40.621075 jq[1673]: false Sep 13 00:00:40.621981 chronyd[1679]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Sep 13 00:00:40.635307 KVP[1675]: KVP LIC Version: 3.1 Sep 13 00:00:40.636040 kernel: hv_utils: KVP IC version 4.0 Sep 13 00:00:40.636375 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 13 00:00:40.645059 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 13 00:00:40.657337 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 13 00:00:40.666582 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 13 00:00:40.675913 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 13 00:00:40.676902 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 13 00:00:40.679327 systemd[1]: Starting update-engine.service - Update Engine... Sep 13 00:00:40.691233 chronyd[1679]: Timezone right/UTC failed leap second check, ignoring Sep 13 00:00:40.704487 extend-filesystems[1674]: Found loop4 Sep 13 00:00:40.704487 extend-filesystems[1674]: Found loop5 Sep 13 00:00:40.704487 extend-filesystems[1674]: Found loop6 Sep 13 00:00:40.704487 extend-filesystems[1674]: Found loop7 Sep 13 00:00:40.704487 extend-filesystems[1674]: Found sda Sep 13 00:00:40.704487 extend-filesystems[1674]: Found sda1 Sep 13 00:00:40.704487 extend-filesystems[1674]: Found sda2 Sep 13 00:00:40.695284 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 13 00:00:40.691458 chronyd[1679]: Loaded seccomp filter (level 2) Sep 13 00:00:40.771897 extend-filesystems[1674]: Found sda3 Sep 13 00:00:40.771897 extend-filesystems[1674]: Found usr Sep 13 00:00:40.771897 extend-filesystems[1674]: Found sda4 Sep 13 00:00:40.771897 extend-filesystems[1674]: Found sda6 Sep 13 00:00:40.771897 extend-filesystems[1674]: Found sda7 Sep 13 00:00:40.771897 extend-filesystems[1674]: Found sda9 Sep 13 00:00:40.771897 extend-filesystems[1674]: Checking size of /dev/sda9 Sep 13 00:00:40.706960 systemd[1]: Started chronyd.service - NTP client/server. Sep 13 00:00:40.886534 update_engine[1689]: I20250913 00:00:40.825644 1689 main.cc:92] Flatcar Update Engine starting Sep 13 00:00:40.888086 extend-filesystems[1674]: Old size kept for /dev/sda9 Sep 13 00:00:40.888086 extend-filesystems[1674]: Found sr0 Sep 13 00:00:40.730293 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 13 00:00:40.912961 jq[1690]: true Sep 13 00:00:40.730482 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 13 00:00:40.733360 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 13 00:00:40.733544 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 13 00:00:40.753813 systemd[1]: motdgen.service: Deactivated successfully. Sep 13 00:00:40.913541 jq[1707]: true Sep 13 00:00:40.755085 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 13 00:00:40.781445 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 13 00:00:40.794097 systemd-logind[1688]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Sep 13 00:00:40.794284 systemd-logind[1688]: New seat seat0. 
Sep 13 00:00:40.799310 systemd[1]: Started systemd-logind.service - User Login Management. Sep 13 00:00:40.814348 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 13 00:00:40.815102 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 13 00:00:40.838469 (ntainerd)[1714]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 13 00:00:40.920071 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1715) Sep 13 00:00:40.958506 tar[1701]: linux-arm64/helm Sep 13 00:00:41.007660 bash[1748]: Updated "/home/core/.ssh/authorized_keys" Sep 13 00:00:41.012465 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 13 00:00:41.023229 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 13 00:00:41.083300 dbus-daemon[1670]: [system] SELinux support is enabled Sep 13 00:00:41.083502 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 13 00:00:41.096602 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 13 00:00:41.096643 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 13 00:00:41.106506 update_engine[1689]: I20250913 00:00:41.106443 1689 update_check_scheduler.cc:74] Next update check in 10m43s Sep 13 00:00:41.107609 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 13 00:00:41.107647 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 13 00:00:41.117939 systemd[1]: Started update-engine.service - Update Engine. Sep 13 00:00:41.123821 dbus-daemon[1670]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 13 00:00:41.134334 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 13 00:00:41.232936 coreos-metadata[1669]: Sep 13 00:00:41.232 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Sep 13 00:00:41.237303 coreos-metadata[1669]: Sep 13 00:00:41.237 INFO Fetch successful Sep 13 00:00:41.237913 coreos-metadata[1669]: Sep 13 00:00:41.237 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Sep 13 00:00:41.243497 coreos-metadata[1669]: Sep 13 00:00:41.243 INFO Fetch successful Sep 13 00:00:41.243497 coreos-metadata[1669]: Sep 13 00:00:41.243 INFO Fetching http://168.63.129.16/machine/c2744d38-8f8a-42e4-9d26-a6f78f38fb7e/741115d5%2D3858%2D4b44%2Daacf%2D15b6342aad30.%5Fci%2D4081.3.5%2Dn%2Da13ccab244?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Sep 13 00:00:41.247678 coreos-metadata[1669]: Sep 13 00:00:41.247 INFO Fetch successful Sep 13 00:00:41.247678 coreos-metadata[1669]: Sep 13 00:00:41.247 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Sep 13 00:00:41.260185 coreos-metadata[1669]: Sep 13 00:00:41.260 INFO Fetch successful Sep 13 00:00:41.299065 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 13 00:00:41.306401 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
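update-engine polls for new Flatcar payloads on the cadence shown ("Next update check in 10m43s"), while locksmithd, the cluster reboot manager, coordinates when the machine may reboot to apply one. Both consult /etc/flatcar/update.conf; a typical sketch with illustrative values (the "reboot" strategy matches what locksmithd logs just below):

    GROUP=stable
    REBOOT_STRATEGY=reboot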
Sep 13 00:00:41.451121 locksmithd[1778]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 13 00:00:41.658412 tar[1701]: linux-arm64/LICENSE Sep 13 00:00:41.658527 tar[1701]: linux-arm64/README.md Sep 13 00:00:41.670382 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 13 00:00:41.800144 containerd[1714]: time="2025-09-13T00:00:41.799963540Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Sep 13 00:00:41.842503 sshd_keygen[1706]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 13 00:00:41.857862 containerd[1714]: time="2025-09-13T00:00:41.857804500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:00:41.862303 containerd[1714]: time="2025-09-13T00:00:41.862252180Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.106-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:00:41.863149 containerd[1714]: time="2025-09-13T00:00:41.863069020Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 13 00:00:41.863554 containerd[1714]: time="2025-09-13T00:00:41.863534860Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 13 00:00:41.864096 containerd[1714]: time="2025-09-13T00:00:41.863780700Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 13 00:00:41.864194 containerd[1714]: time="2025-09-13T00:00:41.864177180Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 13 00:00:41.864331 containerd[1714]: time="2025-09-13T00:00:41.864312060Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:00:41.864393 containerd[1714]: time="2025-09-13T00:00:41.864379580Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:00:41.864531 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 13 00:00:41.866051 containerd[1714]: time="2025-09-13T00:00:41.864636980Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:00:41.866051 containerd[1714]: time="2025-09-13T00:00:41.864656100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 13 00:00:41.866051 containerd[1714]: time="2025-09-13T00:00:41.864670300Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:00:41.866051 containerd[1714]: time="2025-09-13T00:00:41.864681980Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 13 00:00:41.866051 containerd[1714]: time="2025-09-13T00:00:41.864756980Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Sep 13 00:00:41.866051 containerd[1714]: time="2025-09-13T00:00:41.864944020Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:00:41.866051 containerd[1714]: time="2025-09-13T00:00:41.865064100Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:00:41.866051 containerd[1714]: time="2025-09-13T00:00:41.865079700Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 13 00:00:41.866051 containerd[1714]: time="2025-09-13T00:00:41.865182740Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 13 00:00:41.866051 containerd[1714]: time="2025-09-13T00:00:41.865225140Z" level=info msg="metadata content store policy set" policy=shared Sep 13 00:00:41.878035 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 13 00:00:41.888683 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Sep 13 00:00:41.899466 systemd[1]: issuegen.service: Deactivated successfully. Sep 13 00:00:41.901058 containerd[1714]: time="2025-09-13T00:00:41.900179900Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 13 00:00:41.901058 containerd[1714]: time="2025-09-13T00:00:41.900332740Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 13 00:00:41.901058 containerd[1714]: time="2025-09-13T00:00:41.900358580Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 13 00:00:41.901058 containerd[1714]: time="2025-09-13T00:00:41.900375620Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 13 00:00:41.901058 containerd[1714]: time="2025-09-13T00:00:41.900391100Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 13 00:00:41.901058 containerd[1714]: time="2025-09-13T00:00:41.900564660Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 13 00:00:41.901058 containerd[1714]: time="2025-09-13T00:00:41.900802340Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 13 00:00:41.901058 containerd[1714]: time="2025-09-13T00:00:41.900907020Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 13 00:00:41.901058 containerd[1714]: time="2025-09-13T00:00:41.900923540Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 13 00:00:41.901058 containerd[1714]: time="2025-09-13T00:00:41.900936540Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 13 00:00:41.901058 containerd[1714]: time="2025-09-13T00:00:41.900950140Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 13 00:00:41.901058 containerd[1714]: time="2025-09-13T00:00:41.900962780Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Sep 13 00:00:41.901058 containerd[1714]: time="2025-09-13T00:00:41.900976740Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 13 00:00:41.901058 containerd[1714]: time="2025-09-13T00:00:41.900992220Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 13 00:00:41.900479 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 13 00:00:41.909641 containerd[1714]: time="2025-09-13T00:00:41.901007140Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 13 00:00:41.909641 containerd[1714]: time="2025-09-13T00:00:41.907705900Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 13 00:00:41.909641 containerd[1714]: time="2025-09-13T00:00:41.907783060Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 13 00:00:41.909641 containerd[1714]: time="2025-09-13T00:00:41.907811220Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 13 00:00:41.909641 containerd[1714]: time="2025-09-13T00:00:41.907840500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 13 00:00:41.909641 containerd[1714]: time="2025-09-13T00:00:41.907860900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 13 00:00:41.909641 containerd[1714]: time="2025-09-13T00:00:41.907886060Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 13 00:00:41.909641 containerd[1714]: time="2025-09-13T00:00:41.907902900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 13 00:00:41.909641 containerd[1714]: time="2025-09-13T00:00:41.907921220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 13 00:00:41.909641 containerd[1714]: time="2025-09-13T00:00:41.907942260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 13 00:00:41.909641 containerd[1714]: time="2025-09-13T00:00:41.908085980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 13 00:00:41.909641 containerd[1714]: time="2025-09-13T00:00:41.908107460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 13 00:00:41.909641 containerd[1714]: time="2025-09-13T00:00:41.908124700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 13 00:00:41.909641 containerd[1714]: time="2025-09-13T00:00:41.908193420Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 13 00:00:41.909991 containerd[1714]: time="2025-09-13T00:00:41.908211500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 13 00:00:41.909991 containerd[1714]: time="2025-09-13T00:00:41.908320980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 13 00:00:41.909991 containerd[1714]: time="2025-09-13T00:00:41.908337740Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Sep 13 00:00:41.909991 containerd[1714]: time="2025-09-13T00:00:41.908354860Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 13 00:00:41.909991 containerd[1714]: time="2025-09-13T00:00:41.908467460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 13 00:00:41.909991 containerd[1714]: time="2025-09-13T00:00:41.908497780Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 13 00:00:41.909991 containerd[1714]: time="2025-09-13T00:00:41.908510820Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 13 00:00:41.909991 containerd[1714]: time="2025-09-13T00:00:41.908645980Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 13 00:00:41.909991 containerd[1714]: time="2025-09-13T00:00:41.908665460Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 13 00:00:41.909991 containerd[1714]: time="2025-09-13T00:00:41.908677180Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 13 00:00:41.909991 containerd[1714]: time="2025-09-13T00:00:41.908826260Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 13 00:00:41.909991 containerd[1714]: time="2025-09-13T00:00:41.908838340Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 13 00:00:41.909991 containerd[1714]: time="2025-09-13T00:00:41.908861060Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 13 00:00:41.909991 containerd[1714]: time="2025-09-13T00:00:41.908871420Z" level=info msg="NRI interface is disabled by configuration." Sep 13 00:00:41.910298 containerd[1714]: time="2025-09-13T00:00:41.908881540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 13 00:00:41.910319 containerd[1714]: time="2025-09-13T00:00:41.909233780Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 13 00:00:41.910319 containerd[1714]: time="2025-09-13T00:00:41.909293020Z" level=info msg="Connect containerd service" Sep 13 00:00:41.910319 containerd[1714]: time="2025-09-13T00:00:41.909333580Z" level=info msg="using legacy CRI server" Sep 13 00:00:41.910319 containerd[1714]: time="2025-09-13T00:00:41.909340900Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 13 00:00:41.910319 containerd[1714]: time="2025-09-13T00:00:41.909438420Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 13 00:00:41.911184 containerd[1714]: time="2025-09-13T00:00:41.911146500Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 00:00:41.912112 
containerd[1714]: time="2025-09-13T00:00:41.911861140Z" level=info msg="Start subscribing containerd event" Sep 13 00:00:41.912112 containerd[1714]: time="2025-09-13T00:00:41.911927940Z" level=info msg="Start recovering state" Sep 13 00:00:41.912112 containerd[1714]: time="2025-09-13T00:00:41.912026420Z" level=info msg="Start event monitor" Sep 13 00:00:41.912112 containerd[1714]: time="2025-09-13T00:00:41.912041500Z" level=info msg="Start snapshots syncer" Sep 13 00:00:41.912112 containerd[1714]: time="2025-09-13T00:00:41.912052180Z" level=info msg="Start cni network conf syncer for default" Sep 13 00:00:41.912112 containerd[1714]: time="2025-09-13T00:00:41.912059220Z" level=info msg="Start streaming server" Sep 13 00:00:41.912861 containerd[1714]: time="2025-09-13T00:00:41.912753620Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 13 00:00:41.912861 containerd[1714]: time="2025-09-13T00:00:41.912819220Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 13 00:00:41.913194 containerd[1714]: time="2025-09-13T00:00:41.913061300Z" level=info msg="containerd successfully booted in 0.114670s" Sep 13 00:00:41.924927 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 13 00:00:41.933496 systemd[1]: Started containerd.service - containerd container runtime. Sep 13 00:00:41.958220 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Sep 13 00:00:41.973318 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:00:41.981122 (kubelet)[1826]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:00:41.989281 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 13 00:00:42.005359 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 13 00:00:42.012299 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 13 00:00:42.023701 systemd[1]: Reached target getty.target - Login Prompts. Sep 13 00:00:42.029451 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 13 00:00:42.035744 systemd[1]: Startup finished in 689ms (kernel) + 15.120s (initrd) + 20.393s (userspace) = 36.202s. Sep 13 00:00:42.449107 kubelet[1826]: E0913 00:00:42.449060 1826 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:00:42.452095 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:00:42.452237 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:00:42.846942 login[1828]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:00:42.853887 login[1829]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:00:42.858674 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 13 00:00:42.864279 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 13 00:00:42.866528 systemd-logind[1688]: New session 1 of user core. Sep 13 00:00:42.870189 systemd-logind[1688]: New session 2 of user core. Sep 13 00:00:42.891318 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
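The kubelet exits with status 1 because /var/lib/kubelet/config.yaml does not exist yet: on a kubeadm-provisioned node that file is only written by "kubeadm init" or "kubeadm join", so this failure (and the scheduled restarts that repeat it below) is expected until the node joins a cluster. What kubeadm writes there is a KubeletConfiguration; a minimal hand-written sketch would be:

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd    # matches SystemdCgroup:true in the containerd CRI config above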
Sep 13 00:00:42.904353 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 13 00:00:42.942953 (systemd)[1843]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:00:43.277765 systemd[1843]: Queued start job for default target default.target. Sep 13 00:00:43.291500 systemd[1843]: Created slice app.slice - User Application Slice. Sep 13 00:00:43.291532 systemd[1843]: Reached target paths.target - Paths. Sep 13 00:00:43.291544 systemd[1843]: Reached target timers.target - Timers. Sep 13 00:00:43.292720 systemd[1843]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 13 00:00:43.304040 systemd[1843]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 13 00:00:43.304263 systemd[1843]: Reached target sockets.target - Sockets. Sep 13 00:00:43.304369 systemd[1843]: Reached target basic.target - Basic System. Sep 13 00:00:43.304472 systemd[1843]: Reached target default.target - Main User Target. Sep 13 00:00:43.304592 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 13 00:00:43.305596 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 13 00:00:43.305869 systemd[1843]: Startup finished in 356ms. Sep 13 00:00:43.308069 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 13 00:00:44.025442 waagent[1822]: 2025-09-13T00:00:44.025352Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Sep 13 00:00:44.031242 waagent[1822]: 2025-09-13T00:00:44.031179Z INFO Daemon Daemon OS: flatcar 4081.3.5 Sep 13 00:00:44.036083 waagent[1822]: 2025-09-13T00:00:44.036034Z INFO Daemon Daemon Python: 3.11.9 Sep 13 00:00:44.040606 waagent[1822]: 2025-09-13T00:00:44.040426Z INFO Daemon Daemon Run daemon Sep 13 00:00:44.044649 waagent[1822]: 2025-09-13T00:00:44.044602Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.5' Sep 13 00:00:44.053813 waagent[1822]: 2025-09-13T00:00:44.053755Z INFO Daemon Daemon Using waagent for provisioning Sep 13 00:00:44.059175 waagent[1822]: 2025-09-13T00:00:44.059133Z INFO Daemon Daemon Activate resource disk Sep 13 00:00:44.064301 waagent[1822]: 2025-09-13T00:00:44.064255Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Sep 13 00:00:44.075761 waagent[1822]: 2025-09-13T00:00:44.075701Z INFO Daemon Daemon Found device: None Sep 13 00:00:44.080298 waagent[1822]: 2025-09-13T00:00:44.080251Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Sep 13 00:00:44.088719 waagent[1822]: 2025-09-13T00:00:44.088670Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Sep 13 00:00:44.101528 waagent[1822]: 2025-09-13T00:00:44.101476Z INFO Daemon Daemon Clean protocol and wireserver endpoint Sep 13 00:00:44.107342 waagent[1822]: 2025-09-13T00:00:44.107296Z INFO Daemon Daemon Running default provisioning handler Sep 13 00:00:44.119172 waagent[1822]: 2025-09-13T00:00:44.118630Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. 
Sep 13 00:00:44.132824 waagent[1822]: 2025-09-13T00:00:44.132762Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Sep 13 00:00:44.142976 waagent[1822]: 2025-09-13T00:00:44.142920Z INFO Daemon Daemon cloud-init is enabled: False Sep 13 00:00:44.148126 waagent[1822]: 2025-09-13T00:00:44.148078Z INFO Daemon Daemon Copying ovf-env.xml Sep 13 00:00:44.261434 waagent[1822]: 2025-09-13T00:00:44.261333Z INFO Daemon Daemon Successfully mounted dvd Sep 13 00:00:44.291830 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Sep 13 00:00:44.293850 waagent[1822]: 2025-09-13T00:00:44.293765Z INFO Daemon Daemon Detect protocol endpoint Sep 13 00:00:44.299193 waagent[1822]: 2025-09-13T00:00:44.299136Z INFO Daemon Daemon Clean protocol and wireserver endpoint Sep 13 00:00:44.305013 waagent[1822]: 2025-09-13T00:00:44.304959Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Sep 13 00:00:44.311561 waagent[1822]: 2025-09-13T00:00:44.311509Z INFO Daemon Daemon Test for route to 168.63.129.16 Sep 13 00:00:44.317062 waagent[1822]: 2025-09-13T00:00:44.316997Z INFO Daemon Daemon Route to 168.63.129.16 exists Sep 13 00:00:44.322122 waagent[1822]: 2025-09-13T00:00:44.322070Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Sep 13 00:00:44.372491 waagent[1822]: 2025-09-13T00:00:44.372444Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Sep 13 00:00:44.379661 waagent[1822]: 2025-09-13T00:00:44.379633Z INFO Daemon Daemon Wire protocol version:2012-11-30 Sep 13 00:00:44.385358 waagent[1822]: 2025-09-13T00:00:44.385300Z INFO Daemon Daemon Server preferred version:2015-04-05 Sep 13 00:00:44.738793 waagent[1822]: 2025-09-13T00:00:44.738682Z INFO Daemon Daemon Initializing goal state during protocol detection Sep 13 00:00:44.747000 waagent[1822]: 2025-09-13T00:00:44.746925Z INFO Daemon Daemon Forcing an update of the goal state. Sep 13 00:00:44.757421 waagent[1822]: 2025-09-13T00:00:44.757363Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Sep 13 00:00:44.780277 waagent[1822]: 2025-09-13T00:00:44.780229Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175 Sep 13 00:00:44.786438 waagent[1822]: 2025-09-13T00:00:44.786386Z INFO Daemon Sep 13 00:00:44.790635 waagent[1822]: 2025-09-13T00:00:44.790579Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: b9522ba0-9cf8-49d4-8806-1dca67b9cff5 eTag: 8381856790975015529 source: Fabric] Sep 13 00:00:44.803702 waagent[1822]: 2025-09-13T00:00:44.803651Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Sep 13 00:00:44.811261 waagent[1822]: 2025-09-13T00:00:44.811213Z INFO Daemon Sep 13 00:00:44.814264 waagent[1822]: 2025-09-13T00:00:44.814221Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Sep 13 00:00:44.825850 waagent[1822]: 2025-09-13T00:00:44.825811Z INFO Daemon Daemon Downloading artifacts profile blob Sep 13 00:00:44.904739 waagent[1822]: 2025-09-13T00:00:44.904645Z INFO Daemon Downloaded certificate {'thumbprint': '05732131DDE1B3ABB7375731753A102C04174B84', 'hasPrivateKey': True} Sep 13 00:00:44.914894 waagent[1822]: 2025-09-13T00:00:44.914837Z INFO Daemon Fetch goal state completed Sep 13 00:00:44.926190 waagent[1822]: 2025-09-13T00:00:44.926146Z INFO Daemon Daemon Starting provisioning Sep 13 00:00:44.932565 waagent[1822]: 2025-09-13T00:00:44.932491Z INFO Daemon Daemon Handle ovf-env.xml. 
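168.63.129.16 is the fixed Azure wireserver address, the same endpoint coreos-metadata fetched earlier; waagent's "Test for route" step just verifies the VM can reach it. A manual check from inside the guest would be:

    curl -s 'http://168.63.129.16/?comp=versions'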
Sep 13 00:00:44.937637 waagent[1822]: 2025-09-13T00:00:44.937586Z INFO Daemon Daemon Set hostname [ci-4081.3.5-n-a13ccab244] Sep 13 00:00:44.982041 waagent[1822]: 2025-09-13T00:00:44.976997Z INFO Daemon Daemon Publish hostname [ci-4081.3.5-n-a13ccab244] Sep 13 00:00:44.984697 waagent[1822]: 2025-09-13T00:00:44.984633Z INFO Daemon Daemon Examine /proc/net/route for primary interface Sep 13 00:00:44.991831 waagent[1822]: 2025-09-13T00:00:44.991743Z INFO Daemon Daemon Primary interface is [eth0] Sep 13 00:00:45.083898 systemd-networkd[1567]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 00:00:45.083905 systemd-networkd[1567]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 00:00:45.083950 systemd-networkd[1567]: eth0: DHCP lease lost Sep 13 00:00:45.089040 waagent[1822]: 2025-09-13T00:00:45.085041Z INFO Daemon Daemon Create user account if not exists Sep 13 00:00:45.091972 waagent[1822]: 2025-09-13T00:00:45.091913Z INFO Daemon Daemon User core already exists, skip useradd Sep 13 00:00:45.095095 systemd-networkd[1567]: eth0: DHCPv6 lease lost Sep 13 00:00:45.098744 waagent[1822]: 2025-09-13T00:00:45.098678Z INFO Daemon Daemon Configure sudoer Sep 13 00:00:45.104133 waagent[1822]: 2025-09-13T00:00:45.104068Z INFO Daemon Daemon Configure sshd Sep 13 00:00:45.109117 waagent[1822]: 2025-09-13T00:00:45.109068Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Sep 13 00:00:45.123070 waagent[1822]: 2025-09-13T00:00:45.122679Z INFO Daemon Daemon Deploy ssh public key. Sep 13 00:00:45.139073 systemd-networkd[1567]: eth0: DHCPv4 address 10.200.20.14/24, gateway 10.200.20.1 acquired from 168.63.129.16 Sep 13 00:00:46.296045 waagent[1822]: 2025-09-13T00:00:46.292127Z INFO Daemon Daemon Provisioning complete Sep 13 00:00:46.311948 waagent[1822]: 2025-09-13T00:00:46.311899Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Sep 13 00:00:46.318935 waagent[1822]: 2025-09-13T00:00:46.318874Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
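The DHCP lease being dropped and re-acquired mid-provisioning is a side effect of "Publish hostname": waagent re-runs the DHCP exchange so the platform's DHCP server learns the new name. Done by hand, the sequence would be roughly:

    hostnamectl set-hostname ci-4081.3.5-n-a13ccab244
    networkctl renew eth0    # repeat DHCP so the lease carries the new hostname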
Sep 13 00:00:46.329240 waagent[1822]: 2025-09-13T00:00:46.329183Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Sep 13 00:00:46.459852 waagent[1895]: 2025-09-13T00:00:46.459223Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Sep 13 00:00:46.459852 waagent[1895]: 2025-09-13T00:00:46.459368Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.5 Sep 13 00:00:46.459852 waagent[1895]: 2025-09-13T00:00:46.459419Z INFO ExtHandler ExtHandler Python: 3.11.9 Sep 13 00:00:46.636647 waagent[1895]: 2025-09-13T00:00:46.636511Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.5; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Sep 13 00:00:46.636955 waagent[1895]: 2025-09-13T00:00:46.636918Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 13 00:00:46.637106 waagent[1895]: 2025-09-13T00:00:46.637072Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 13 00:00:46.645663 waagent[1895]: 2025-09-13T00:00:46.645596Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Sep 13 00:00:46.651454 waagent[1895]: 2025-09-13T00:00:46.651409Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175 Sep 13 00:00:46.652057 waagent[1895]: 2025-09-13T00:00:46.652000Z INFO ExtHandler Sep 13 00:00:46.652206 waagent[1895]: 2025-09-13T00:00:46.652173Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 8ae78d12-a996-4c2c-b2ea-8b03ac941430 eTag: 8381856790975015529 source: Fabric] Sep 13 00:00:46.653043 waagent[1895]: 2025-09-13T00:00:46.652550Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Sep 13 00:00:46.653227 waagent[1895]: 2025-09-13T00:00:46.653186Z INFO ExtHandler Sep 13 00:00:46.653349 waagent[1895]: 2025-09-13T00:00:46.653320Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Sep 13 00:00:46.657461 waagent[1895]: 2025-09-13T00:00:46.657431Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Sep 13 00:00:46.732754 waagent[1895]: 2025-09-13T00:00:46.732675Z INFO ExtHandler Downloaded certificate {'thumbprint': '05732131DDE1B3ABB7375731753A102C04174B84', 'hasPrivateKey': True} Sep 13 00:00:46.733420 waagent[1895]: 2025-09-13T00:00:46.733378Z INFO ExtHandler Fetch goal state completed Sep 13 00:00:46.750056 waagent[1895]: 2025-09-13T00:00:46.749978Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1895 Sep 13 00:00:46.751055 waagent[1895]: 2025-09-13T00:00:46.750307Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Sep 13 00:00:46.751921 waagent[1895]: 2025-09-13T00:00:46.751873Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.5', '', 'Flatcar Container Linux by Kinvolk'] Sep 13 00:00:46.752305 waagent[1895]: 2025-09-13T00:00:46.752265Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Sep 13 00:00:46.812136 waagent[1895]: 2025-09-13T00:00:46.812090Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Sep 13 00:00:46.812324 waagent[1895]: 2025-09-13T00:00:46.812286Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Sep 13 00:00:46.818437 waagent[1895]: 2025-09-13T00:00:46.818397Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not 
enabled. Adding it now Sep 13 00:00:46.824822 systemd[1]: Reloading requested from client PID 1908 ('systemctl') (unit waagent.service)... Sep 13 00:00:46.824833 systemd[1]: Reloading... Sep 13 00:00:46.911071 zram_generator::config[1943]: No configuration found. Sep 13 00:00:47.016794 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:00:47.094827 systemd[1]: Reloading finished in 269 ms. Sep 13 00:00:47.118791 waagent[1895]: 2025-09-13T00:00:47.118659Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Sep 13 00:00:47.124127 systemd[1]: Reloading requested from client PID 1998 ('systemctl') (unit waagent.service)... Sep 13 00:00:47.124145 systemd[1]: Reloading... Sep 13 00:00:47.207091 zram_generator::config[2035]: No configuration found. Sep 13 00:00:47.304492 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:00:47.379096 systemd[1]: Reloading finished in 254 ms. Sep 13 00:00:47.402954 waagent[1895]: 2025-09-13T00:00:47.402827Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Sep 13 00:00:47.403075 waagent[1895]: 2025-09-13T00:00:47.402992Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Sep 13 00:00:47.925974 waagent[1895]: 2025-09-13T00:00:47.925883Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Sep 13 00:00:47.926541 waagent[1895]: 2025-09-13T00:00:47.926487Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Sep 13 00:00:47.927301 waagent[1895]: 2025-09-13T00:00:47.927216Z INFO ExtHandler ExtHandler Starting env monitor service. Sep 13 00:00:47.927729 waagent[1895]: 2025-09-13T00:00:47.927637Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Sep 13 00:00:47.928678 waagent[1895]: 2025-09-13T00:00:47.927941Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 13 00:00:47.928678 waagent[1895]: 2025-09-13T00:00:47.928047Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 13 00:00:47.928678 waagent[1895]: 2025-09-13T00:00:47.928253Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
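The docker.socket complaint seen during both daemon-reloads is harmless but easy to fix: line 6 of the unit still names /var/run/docker.sock, which systemd transparently rewrites to /run/docker.sock. The modern form of the stanza, mirroring the upstream unit:

    [Socket]
    ListenStream=/run/docker.sock
    SocketMode=0660
    SocketUser=root
    SocketGroup=docker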
Sep 13 00:00:47.928678 waagent[1895]: 2025-09-13T00:00:47.928419Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Sep 13 00:00:47.928678 waagent[1895]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Sep 13 00:00:47.928678 waagent[1895]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Sep 13 00:00:47.928678 waagent[1895]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Sep 13 00:00:47.928678 waagent[1895]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Sep 13 00:00:47.928678 waagent[1895]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 13 00:00:47.928678 waagent[1895]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 13 00:00:47.929074 waagent[1895]: 2025-09-13T00:00:47.928989Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 13 00:00:47.929235 waagent[1895]: 2025-09-13T00:00:47.929189Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Sep 13 00:00:47.929292 waagent[1895]: 2025-09-13T00:00:47.929243Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Sep 13 00:00:47.929727 waagent[1895]: 2025-09-13T00:00:47.929671Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Sep 13 00:00:47.929911 waagent[1895]: 2025-09-13T00:00:47.929866Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Sep 13 00:00:47.930500 waagent[1895]: 2025-09-13T00:00:47.930451Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Sep 13 00:00:47.930621 waagent[1895]: 2025-09-13T00:00:47.930589Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 13 00:00:47.930833 waagent[1895]: 2025-09-13T00:00:47.930791Z INFO EnvHandler ExtHandler Configure routes Sep 13 00:00:47.931279 waagent[1895]: 2025-09-13T00:00:47.931242Z INFO EnvHandler ExtHandler Gateway:None Sep 13 00:00:47.934140 waagent[1895]: 2025-09-13T00:00:47.934090Z INFO EnvHandler ExtHandler Routes:None Sep 13 00:00:47.936195 waagent[1895]: 2025-09-13T00:00:47.936072Z INFO ExtHandler ExtHandler Sep 13 00:00:47.936401 waagent[1895]: 2025-09-13T00:00:47.936347Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: a538206d-4ead-47ff-b058-0ab3eae88f6d correlation e3eeb03f-81c3-48cc-b7d4-96dcbef5d71a created: 2025-09-12T23:59:14.496521Z] Sep 13 00:00:47.937525 waagent[1895]: 2025-09-13T00:00:47.937478Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
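The /proc/net/route dump prints IPv4 addresses as little-endian hex, so the gateway 0114C80A is 10.200.20.1 read byte-reversed, matching the DHCP lease above; likewise 0014C80A is the 10.200.20.0 network and mask 00FFFFFF is /24. A quick shell decode:

    printf '%d.%d.%d.%d\n' 0x0A 0xC8 0x14 0x01    # -> 10.200.20.1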
Sep 13 00:00:47.939974 waagent[1895]: 2025-09-13T00:00:47.939191Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 3 ms] Sep 13 00:00:47.978878 waagent[1895]: 2025-09-13T00:00:47.978743Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: C5F7F91E-DADC-4210-9EA6-A4560453A465;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Sep 13 00:00:48.124949 waagent[1895]: 2025-09-13T00:00:48.124501Z INFO MonitorHandler ExtHandler Network interfaces: Sep 13 00:00:48.124949 waagent[1895]: Executing ['ip', '-a', '-o', 'link']: Sep 13 00:00:48.124949 waagent[1895]: 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Sep 13 00:00:48.124949 waagent[1895]: 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:c3:53:0c brd ff:ff:ff:ff:ff:ff Sep 13 00:00:48.124949 waagent[1895]: 3: enP52926s1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:c3:53:0c brd ff:ff:ff:ff:ff:ff\ altname enP52926p0s2 Sep 13 00:00:48.124949 waagent[1895]: Executing ['ip', '-4', '-a', '-o', 'address']: Sep 13 00:00:48.124949 waagent[1895]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Sep 13 00:00:48.124949 waagent[1895]: 2: eth0 inet 10.200.20.14/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Sep 13 00:00:48.124949 waagent[1895]: Executing ['ip', '-6', '-a', '-o', 'address']: Sep 13 00:00:48.124949 waagent[1895]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Sep 13 00:00:48.124949 waagent[1895]: 2: eth0 inet6 fe80::20d:3aff:fec3:530c/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Sep 13 00:00:48.172508 waagent[1895]: 2025-09-13T00:00:48.172444Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Sep 13 00:00:48.172508 waagent[1895]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Sep 13 00:00:48.172508 waagent[1895]: pkts bytes target prot opt in out source destination Sep 13 00:00:48.172508 waagent[1895]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Sep 13 00:00:48.172508 waagent[1895]: pkts bytes target prot opt in out source destination Sep 13 00:00:48.172508 waagent[1895]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Sep 13 00:00:48.172508 waagent[1895]: pkts bytes target prot opt in out source destination Sep 13 00:00:48.172508 waagent[1895]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Sep 13 00:00:48.172508 waagent[1895]: 6 509 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Sep 13 00:00:48.172508 waagent[1895]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Sep 13 00:00:48.175833 waagent[1895]: 2025-09-13T00:00:48.175773Z INFO EnvHandler ExtHandler Current Firewall rules: Sep 13 00:00:48.175833 waagent[1895]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Sep 13 00:00:48.175833 waagent[1895]: pkts bytes target prot opt in out source destination Sep 13 00:00:48.175833 waagent[1895]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Sep 13 00:00:48.175833 waagent[1895]: pkts bytes target prot opt in out source destination Sep 13 00:00:48.175833 waagent[1895]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Sep 13 00:00:48.175833 waagent[1895]: pkts bytes target prot opt in out source destination Sep 13 00:00:48.175833 waagent[1895]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Sep 13 00:00:48.175833 waagent[1895]: 10 1102 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Sep 13 00:00:48.175833 waagent[1895]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Sep 13 00:00:48.176111 waagent[1895]: 2025-09-13T00:00:48.176041Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Sep 13 00:00:52.676143 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 13 00:00:52.684192 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:00:52.777215 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:00:52.781039 (kubelet)[2125]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:00:52.901400 kubelet[2125]: E0913 00:00:52.901346 2125 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:00:52.903744 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:00:52.903864 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:01:01.431516 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 13 00:01:01.432999 systemd[1]: Started sshd@0-10.200.20.14:22-10.200.16.10:33588.service - OpenSSH per-connection server daemon (10.200.16.10:33588). 
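The three OUTPUT rules waagent installs fence off the wireserver: DNS to 168.63.129.16 is allowed for every user, other TCP to it is allowed only for UID 0 (the agent itself), and any remaining new connection is dropped. As plain iptables commands the equivalent is approximately:

    iptables -A OUTPUT -d 168.63.129.16 -p tcp --dport 53 -j ACCEPT
    iptables -A OUTPUT -d 168.63.129.16 -p tcp -m owner --uid-owner 0 -j ACCEPT
    iptables -A OUTPUT -d 168.63.129.16 -p tcp -m conntrack --ctstate INVALID,NEW -j DROP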
Sep 13 00:01:01.968401 sshd[2132]: Accepted publickey for core from 10.200.16.10 port 33588 ssh2: RSA SHA256:blsec9WXBGJMQIu7Y4EANO2ooyLDRMDELziWNnUsds8 Sep 13 00:01:01.969842 sshd[2132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:01:01.974447 systemd-logind[1688]: New session 3 of user core. Sep 13 00:01:01.979200 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 13 00:01:02.358514 systemd[1]: Started sshd@1-10.200.20.14:22-10.200.16.10:33596.service - OpenSSH per-connection server daemon (10.200.16.10:33596). Sep 13 00:01:02.770199 sshd[2137]: Accepted publickey for core from 10.200.16.10 port 33596 ssh2: RSA SHA256:blsec9WXBGJMQIu7Y4EANO2ooyLDRMDELziWNnUsds8 Sep 13 00:01:02.771529 sshd[2137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:01:02.776119 systemd-logind[1688]: New session 4 of user core. Sep 13 00:01:02.783218 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 13 00:01:02.925964 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 13 00:01:02.935429 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:01:03.031097 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:01:03.035489 (kubelet)[2149]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:01:03.079222 sshd[2137]: pam_unix(sshd:session): session closed for user core Sep 13 00:01:03.081850 systemd-logind[1688]: Session 4 logged out. Waiting for processes to exit. Sep 13 00:01:03.082320 systemd[1]: sshd@1-10.200.20.14:22-10.200.16.10:33596.service: Deactivated successfully. Sep 13 00:01:03.083800 systemd[1]: session-4.scope: Deactivated successfully. Sep 13 00:01:03.085680 systemd-logind[1688]: Removed session 4. Sep 13 00:01:03.160320 kubelet[2149]: E0913 00:01:03.159215 2149 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:01:03.159601 systemd[1]: Started sshd@2-10.200.20.14:22-10.200.16.10:33598.service - OpenSSH per-connection server daemon (10.200.16.10:33598). Sep 13 00:01:03.163415 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:01:03.163664 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:01:03.565975 sshd[2160]: Accepted publickey for core from 10.200.16.10 port 33598 ssh2: RSA SHA256:blsec9WXBGJMQIu7Y4EANO2ooyLDRMDELziWNnUsds8 Sep 13 00:01:03.567251 sshd[2160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:01:03.570974 systemd-logind[1688]: New session 5 of user core. Sep 13 00:01:03.578212 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 13 00:01:03.881730 sshd[2160]: pam_unix(sshd:session): session closed for user core Sep 13 00:01:03.885531 systemd-logind[1688]: Session 5 logged out. Waiting for processes to exit. Sep 13 00:01:03.886443 systemd[1]: sshd@2-10.200.20.14:22-10.200.16.10:33598.service: Deactivated successfully. Sep 13 00:01:03.888479 systemd[1]: session-5.scope: Deactivated successfully. Sep 13 00:01:03.890445 systemd-logind[1688]: Removed session 5. 
Sep 13 00:01:03.956950 systemd[1]: Started sshd@3-10.200.20.14:22-10.200.16.10:33612.service - OpenSSH per-connection server daemon (10.200.16.10:33612). Sep 13 00:01:04.368971 sshd[2168]: Accepted publickey for core from 10.200.16.10 port 33612 ssh2: RSA SHA256:blsec9WXBGJMQIu7Y4EANO2ooyLDRMDELziWNnUsds8 Sep 13 00:01:04.370279 sshd[2168]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:01:04.374791 systemd-logind[1688]: New session 6 of user core. Sep 13 00:01:04.385230 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 13 00:01:04.480149 chronyd[1679]: Selected source PHC0 Sep 13 00:01:04.684773 sshd[2168]: pam_unix(sshd:session): session closed for user core Sep 13 00:01:04.687911 systemd[1]: sshd@3-10.200.20.14:22-10.200.16.10:33612.service: Deactivated successfully. Sep 13 00:01:04.689410 systemd[1]: session-6.scope: Deactivated successfully. Sep 13 00:01:04.689975 systemd-logind[1688]: Session 6 logged out. Waiting for processes to exit. Sep 13 00:01:04.690912 systemd-logind[1688]: Removed session 6. Sep 13 00:01:04.759295 systemd[1]: Started sshd@4-10.200.20.14:22-10.200.16.10:33622.service - OpenSSH per-connection server daemon (10.200.16.10:33622). Sep 13 00:01:05.175501 sshd[2175]: Accepted publickey for core from 10.200.16.10 port 33622 ssh2: RSA SHA256:blsec9WXBGJMQIu7Y4EANO2ooyLDRMDELziWNnUsds8 Sep 13 00:01:05.176767 sshd[2175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:01:05.180516 systemd-logind[1688]: New session 7 of user core. Sep 13 00:01:05.187222 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 13 00:01:05.660585 sudo[2178]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 13 00:01:05.660884 sudo[2178]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:01:05.692898 sudo[2178]: pam_unix(sudo:session): session closed for user root Sep 13 00:01:05.797905 sshd[2175]: pam_unix(sshd:session): session closed for user core Sep 13 00:01:05.801718 systemd[1]: sshd@4-10.200.20.14:22-10.200.16.10:33622.service: Deactivated successfully. Sep 13 00:01:05.803360 systemd[1]: session-7.scope: Deactivated successfully. Sep 13 00:01:05.805717 systemd-logind[1688]: Session 7 logged out. Waiting for processes to exit. Sep 13 00:01:05.806746 systemd-logind[1688]: Removed session 7. Sep 13 00:01:05.878248 systemd[1]: Started sshd@5-10.200.20.14:22-10.200.16.10:33638.service - OpenSSH per-connection server daemon (10.200.16.10:33638). Sep 13 00:01:06.287349 sshd[2183]: Accepted publickey for core from 10.200.16.10 port 33638 ssh2: RSA SHA256:blsec9WXBGJMQIu7Y4EANO2ooyLDRMDELziWNnUsds8 Sep 13 00:01:06.288718 sshd[2183]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:01:06.292991 systemd-logind[1688]: New session 8 of user core. Sep 13 00:01:06.298185 systemd[1]: Started session-8.scope - Session 8 of User core. 
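"Selected source PHC0" means chronyd is now disciplining the clock from the PTP hardware clock the Hyper-V host exposes rather than from a network NTP source, which is the setup Azure recommends. The corresponding chrony.conf directive looks like this (device node assumed; it may be /dev/ptp_hyperv on some images):

    refclock PHC /dev/ptp0 poll 3 dpoll -2 offset 0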
Sep 13 00:01:06.524377 sudo[2187]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 13 00:01:06.524650 sudo[2187]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:01:06.527724 sudo[2187]: pam_unix(sudo:session): session closed for user root Sep 13 00:01:06.532522 sudo[2186]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 13 00:01:06.532789 sudo[2186]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:01:06.545366 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 13 00:01:06.547767 auditctl[2190]: No rules Sep 13 00:01:06.548095 systemd[1]: audit-rules.service: Deactivated successfully. Sep 13 00:01:06.548265 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 13 00:01:06.551188 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 13 00:01:06.580620 augenrules[2208]: No rules Sep 13 00:01:06.582096 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 13 00:01:06.583587 sudo[2186]: pam_unix(sudo:session): session closed for user root Sep 13 00:01:06.685867 sshd[2183]: pam_unix(sshd:session): session closed for user core Sep 13 00:01:06.688399 systemd-logind[1688]: Session 8 logged out. Waiting for processes to exit. Sep 13 00:01:06.689966 systemd[1]: sshd@5-10.200.20.14:22-10.200.16.10:33638.service: Deactivated successfully. Sep 13 00:01:06.692350 systemd[1]: session-8.scope: Deactivated successfully. Sep 13 00:01:06.693336 systemd-logind[1688]: Removed session 8. Sep 13 00:01:06.760290 systemd[1]: Started sshd@6-10.200.20.14:22-10.200.16.10:33640.service - OpenSSH per-connection server daemon (10.200.16.10:33640). Sep 13 00:01:07.172917 sshd[2216]: Accepted publickey for core from 10.200.16.10 port 33640 ssh2: RSA SHA256:blsec9WXBGJMQIu7Y4EANO2ooyLDRMDELziWNnUsds8 Sep 13 00:01:07.174180 sshd[2216]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:01:07.179107 systemd-logind[1688]: New session 9 of user core. Sep 13 00:01:07.186221 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 13 00:01:07.411697 sudo[2219]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 13 00:01:07.411960 sudo[2219]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:01:08.593273 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 13 00:01:08.593361 (dockerd)[2235]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 13 00:01:09.378827 dockerd[2235]: time="2025-09-13T00:01:09.378771868Z" level=info msg="Starting up" Sep 13 00:01:09.921419 dockerd[2235]: time="2025-09-13T00:01:09.921375632Z" level=info msg="Loading containers: start." Sep 13 00:01:10.170039 kernel: Initializing XFRM netlink socket Sep 13 00:01:10.378819 systemd-networkd[1567]: docker0: Link UP Sep 13 00:01:10.415163 dockerd[2235]: time="2025-09-13T00:01:10.415124003Z" level=info msg="Loading containers: done." Sep 13 00:01:10.426934 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck87621743-merged.mount: Deactivated successfully. 
Sep 13 00:01:10.440707 dockerd[2235]: time="2025-09-13T00:01:10.440658979Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 13 00:01:10.440882 dockerd[2235]: time="2025-09-13T00:01:10.440766139Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 13 00:01:10.440882 dockerd[2235]: time="2025-09-13T00:01:10.440870179Z" level=info msg="Daemon has completed initialization" Sep 13 00:01:10.527724 dockerd[2235]: time="2025-09-13T00:01:10.527616416Z" level=info msg="API listen on /run/docker.sock" Sep 13 00:01:10.528140 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 13 00:01:11.314153 containerd[1714]: time="2025-09-13T00:01:11.314113748Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\"" Sep 13 00:01:12.237774 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2603175514.mount: Deactivated successfully. Sep 13 00:01:13.176770 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 13 00:01:13.182281 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:01:13.292180 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:01:13.292822 (kubelet)[2436]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:01:13.329114 kubelet[2436]: E0913 00:01:13.329035 2436 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:01:13.331737 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:01:13.331864 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
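
[Editor's note] The kubelet failure above is the expected pre-bootstrap state: /var/lib/kubelet/config.yaml does not exist until cluster bootstrap writes it, so kubelet exits with status 1 and systemd keeps scheduling restarts (the counter is at 3 here). A minimal sketch of the same check, with the path taken from the error message:

    import os

    KUBELET_CONFIG = "/var/lib/kubelet/config.yaml"  # path from the error above

    if not os.path.isfile(KUBELET_CONFIG):
        # Until bootstrap (e.g. kubeadm init/join) writes this file, kubelet
        # exits 1 and systemd re-runs it, incrementing the restart counter.
        print(f"{KUBELET_CONFIG} missing; kubelet will keep crash-looping")
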
Sep 13 00:01:13.764092 containerd[1714]: time="2025-09-13T00:01:13.764040566Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:01:13.768579 containerd[1714]: time="2025-09-13T00:01:13.768523764Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.13: active requests=0, bytes read=25687325" Sep 13 00:01:13.772100 containerd[1714]: time="2025-09-13T00:01:13.772058483Z" level=info msg="ImageCreate event name:\"sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:01:13.777683 containerd[1714]: time="2025-09-13T00:01:13.777639681Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:01:13.778981 containerd[1714]: time="2025-09-13T00:01:13.778722441Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.13\" with image id \"sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\", size \"25683924\" in 2.464566893s" Sep 13 00:01:13.778981 containerd[1714]: time="2025-09-13T00:01:13.778756441Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6\"" Sep 13 00:01:13.780153 containerd[1714]: time="2025-09-13T00:01:13.780124761Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\"" Sep 13 00:01:15.061918 containerd[1714]: time="2025-09-13T00:01:15.061858735Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:01:15.066959 containerd[1714]: time="2025-09-13T00:01:15.066743653Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.13: active requests=0, bytes read=22459767" Sep 13 00:01:15.074433 containerd[1714]: time="2025-09-13T00:01:15.074373010Z" level=info msg="ImageCreate event name:\"sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:01:15.080824 containerd[1714]: time="2025-09-13T00:01:15.080730248Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:01:15.082839 containerd[1714]: time="2025-09-13T00:01:15.082045968Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.13\" with image id \"sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\", size \"24028542\" in 1.301887927s" Sep 13 00:01:15.082839 containerd[1714]: time="2025-09-13T00:01:15.082085488Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef\"" Sep 13 
00:01:15.083271 containerd[1714]: time="2025-09-13T00:01:15.083251807Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\"" Sep 13 00:01:16.261051 containerd[1714]: time="2025-09-13T00:01:16.260630176Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:01:16.264856 containerd[1714]: time="2025-09-13T00:01:16.264822175Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.13: active requests=0, bytes read=17127506" Sep 13 00:01:16.269241 containerd[1714]: time="2025-09-13T00:01:16.269212653Z" level=info msg="ImageCreate event name:\"sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:01:16.275072 containerd[1714]: time="2025-09-13T00:01:16.275044811Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:01:16.276241 containerd[1714]: time="2025-09-13T00:01:16.275728371Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.13\" with image id \"sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\", size \"18696299\" in 1.192298124s" Sep 13 00:01:16.276241 containerd[1714]: time="2025-09-13T00:01:16.275759811Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27\"" Sep 13 00:01:16.276350 containerd[1714]: time="2025-09-13T00:01:16.276337531Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\"" Sep 13 00:01:17.791008 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount488820694.mount: Deactivated successfully. 
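
[Editor's note] The tmpmount cleanup units above encode a filesystem path in the unit name: systemd maps "/" to "-" and escapes a literal "-" as "\x2d". A small decoder for exactly this case (systemd's full escaping also covers other bytes as \xNN, which this sketch ignores):

    def systemd_mount_unit_to_path(unit: str) -> str:
        # Split on "-" first so dashes restored from "\x2d" are not re-split.
        segments = unit.removesuffix(".mount").split("-")
        return "/" + "/".join(seg.replace("\\x2d", "-") for seg in segments)

    print(systemd_mount_unit_to_path(
        "var-lib-containerd-tmpmounts-containerd\\x2dmount488820694.mount"))
    # -> /var/lib/containerd/tmpmounts/containerd-mount488820694
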
Sep 13 00:01:18.138047 containerd[1714]: time="2025-09-13T00:01:18.137905187Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:01:18.142080 containerd[1714]: time="2025-09-13T00:01:18.142044904Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.13: active requests=0, bytes read=26954907" Sep 13 00:01:18.146399 containerd[1714]: time="2025-09-13T00:01:18.146350620Z" level=info msg="ImageCreate event name:\"sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:01:18.156300 containerd[1714]: time="2025-09-13T00:01:18.156251212Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:01:18.157334 containerd[1714]: time="2025-09-13T00:01:18.156783011Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.13\" with image id \"sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e\", repo tag \"registry.k8s.io/kube-proxy:v1.31.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\", size \"26953926\" in 1.88041892s" Sep 13 00:01:18.157334 containerd[1714]: time="2025-09-13T00:01:18.156819411Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e\"" Sep 13 00:01:18.157334 containerd[1714]: time="2025-09-13T00:01:18.157242651Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 13 00:01:18.847996 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1621685306.mount: Deactivated successfully. 
Sep 13 00:01:19.926528 containerd[1714]: time="2025-09-13T00:01:19.926467750Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:01:19.929921 containerd[1714]: time="2025-09-13T00:01:19.929893347Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Sep 13 00:01:19.935555 containerd[1714]: time="2025-09-13T00:01:19.934942103Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:01:19.945377 containerd[1714]: time="2025-09-13T00:01:19.945341695Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:01:19.946560 containerd[1714]: time="2025-09-13T00:01:19.946532254Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.789261403s" Sep 13 00:01:19.946657 containerd[1714]: time="2025-09-13T00:01:19.946640054Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Sep 13 00:01:19.947267 containerd[1714]: time="2025-09-13T00:01:19.947153973Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 13 00:01:20.504716 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount80777661.mount: Deactivated successfully. 
Sep 13 00:01:20.530835 containerd[1714]: time="2025-09-13T00:01:20.530059572Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:01:20.534861 containerd[1714]: time="2025-09-13T00:01:20.534832608Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Sep 13 00:01:20.539146 containerd[1714]: time="2025-09-13T00:01:20.539107525Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:01:20.543908 containerd[1714]: time="2025-09-13T00:01:20.543866041Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:01:20.544577 containerd[1714]: time="2025-09-13T00:01:20.544543760Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 597.358707ms" Sep 13 00:01:20.544577 containerd[1714]: time="2025-09-13T00:01:20.544575480Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 13 00:01:20.545110 containerd[1714]: time="2025-09-13T00:01:20.545080520Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 13 00:01:20.990901 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Sep 13 00:01:21.237678 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3748117534.mount: Deactivated successfully. Sep 13 00:01:23.426268 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Sep 13 00:01:23.432220 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:01:23.549518 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:01:23.552719 (kubelet)[2575]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:01:23.666262 kubelet[2575]: E0913 00:01:23.666179 2575 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:01:23.669793 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:01:23.669927 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
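
[Editor's note] Each containerd "Pulled image" record above pairs a byte size with a wall-clock duration, so effective pull throughput is just size/duration. A worked sketch with the figures copied from the pulls so far (sizes in bytes, durations in seconds):

    pulls = {
        "kube-apiserver:v1.31.13": (25_683_924, 2.464566893),
        "kube-controller-manager:v1.31.13": (24_028_542, 1.301887927),
        "kube-scheduler:v1.31.13": (18_696_299, 1.192298124),
        "kube-proxy:v1.31.13": (26_953_926, 1.88041892),
        "coredns:v1.11.3": (16_948_420, 1.789261403),
        "pause:3.10": (267_933, 0.597358707),
    }
    for image, (size, secs) in pulls.items():
        print(f"{image}: {size / secs / 1e6:.1f} MB/s")

The large images land around 9-18 MB/s, while the 268 kB pause image manages well under 1 MB/s, which suggests per-request latency rather than bandwidth dominates small pulls.
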
Sep 13 00:01:24.156055 containerd[1714]: time="2025-09-13T00:01:24.155125022Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:01:24.159000 containerd[1714]: time="2025-09-13T00:01:24.158968779Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66537161" Sep 13 00:01:24.165168 containerd[1714]: time="2025-09-13T00:01:24.165139974Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:01:24.172256 containerd[1714]: time="2025-09-13T00:01:24.171834368Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:01:24.173031 containerd[1714]: time="2025-09-13T00:01:24.172982728Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 3.627868328s" Sep 13 00:01:24.173089 containerd[1714]: time="2025-09-13T00:01:24.173032768Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Sep 13 00:01:26.846369 update_engine[1689]: I20250913 00:01:26.845741 1689 update_attempter.cc:509] Updating boot flags... Sep 13 00:01:27.041099 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2615) Sep 13 00:01:28.696295 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:01:28.703279 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:01:28.731054 systemd[1]: Reloading requested from client PID 2650 ('systemctl') (unit session-9.scope)... Sep 13 00:01:28.731075 systemd[1]: Reloading... Sep 13 00:01:28.833186 zram_generator::config[2688]: No configuration found. Sep 13 00:01:28.937204 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:01:29.014673 systemd[1]: Reloading finished in 283 ms. Sep 13 00:01:29.059727 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:01:29.063159 systemd[1]: kubelet.service: Deactivated successfully. Sep 13 00:01:29.063368 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:01:29.068244 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:01:29.770293 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:01:29.775182 (kubelet)[2760]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 13 00:01:29.809377 kubelet[2760]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 13 00:01:29.809377 kubelet[2760]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 13 00:01:29.809377 kubelet[2760]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:01:29.809708 kubelet[2760]: I0913 00:01:29.809416 2760 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 00:01:30.843845 kubelet[2760]: I0913 00:01:30.843805 2760 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 13 00:01:30.843845 kubelet[2760]: I0913 00:01:30.843837 2760 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 00:01:30.844238 kubelet[2760]: I0913 00:01:30.844082 2760 server.go:934] "Client rotation is on, will bootstrap in background" Sep 13 00:01:30.868046 kubelet[2760]: E0913 00:01:30.866899 2760 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.14:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:01:30.870965 kubelet[2760]: I0913 00:01:30.870724 2760 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:01:30.876351 kubelet[2760]: E0913 00:01:30.876321 2760 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 00:01:30.876351 kubelet[2760]: I0913 00:01:30.876349 2760 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 13 00:01:30.879968 kubelet[2760]: I0913 00:01:30.879945 2760 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 13 00:01:30.880632 kubelet[2760]: I0913 00:01:30.880613 2760 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 13 00:01:30.880765 kubelet[2760]: I0913 00:01:30.880740 2760 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 00:01:30.880942 kubelet[2760]: I0913 00:01:30.880766 2760 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.5-n-a13ccab244","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 13 00:01:30.881040 kubelet[2760]: I0913 00:01:30.880948 2760 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 00:01:30.881040 kubelet[2760]: I0913 00:01:30.880957 2760 container_manager_linux.go:300] "Creating device plugin manager" Sep 13 00:01:30.881106 kubelet[2760]: I0913 00:01:30.881088 2760 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:01:30.883306 kubelet[2760]: I0913 00:01:30.883287 2760 kubelet.go:408] "Attempting to sync node with API server" Sep 13 00:01:30.883333 kubelet[2760]: I0913 00:01:30.883314 2760 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 00:01:30.883356 kubelet[2760]: I0913 00:01:30.883335 2760 kubelet.go:314] "Adding apiserver pod source" Sep 13 00:01:30.883356 kubelet[2760]: I0913 00:01:30.883347 2760 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 00:01:30.888997 kubelet[2760]: W0913 00:01:30.888945 2760 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.5-n-a13ccab244&limit=500&resourceVersion=0": dial tcp 10.200.20.14:6443: connect: connection refused Sep 13 00:01:30.889060 kubelet[2760]: E0913 00:01:30.889002 2760 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.200.20.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.5-n-a13ccab244&limit=500&resourceVersion=0\": dial tcp 10.200.20.14:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:01:30.890753 kubelet[2760]: I0913 00:01:30.890118 2760 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 13 00:01:30.890753 kubelet[2760]: I0913 00:01:30.890571 2760 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 13 00:01:30.890753 kubelet[2760]: W0913 00:01:30.890614 2760 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 13 00:01:30.891156 kubelet[2760]: I0913 00:01:30.891129 2760 server.go:1274] "Started kubelet" Sep 13 00:01:30.894010 kubelet[2760]: I0913 00:01:30.893379 2760 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 00:01:30.895433 kubelet[2760]: W0913 00:01:30.895393 2760 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.14:6443: connect: connection refused Sep 13 00:01:30.895552 kubelet[2760]: E0913 00:01:30.895533 2760 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.14:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:01:30.896944 kubelet[2760]: I0913 00:01:30.896631 2760 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 00:01:30.898517 kubelet[2760]: I0913 00:01:30.898496 2760 server.go:449] "Adding debug handlers to kubelet server" Sep 13 00:01:30.898708 kubelet[2760]: E0913 00:01:30.898669 2760 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.5-n-a13ccab244\" not found" Sep 13 00:01:30.898708 kubelet[2760]: I0913 00:01:30.898538 2760 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 13 00:01:30.900993 kubelet[2760]: I0913 00:01:30.898528 2760 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 13 00:01:30.901268 kubelet[2760]: I0913 00:01:30.901220 2760 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 00:01:30.901505 kubelet[2760]: I0913 00:01:30.901487 2760 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 00:01:30.905757 kubelet[2760]: I0913 00:01:30.905733 2760 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 00:01:30.905876 kubelet[2760]: W0913 00:01:30.905830 2760 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.14:6443: connect: connection refused Sep 13 00:01:30.905912 kubelet[2760]: E0913 00:01:30.905881 2760 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://10.200.20.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.14:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:01:30.905974 kubelet[2760]: E0913 00:01:30.905943 2760 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-n-a13ccab244?timeout=10s\": dial tcp 10.200.20.14:6443: connect: connection refused" interval="200ms" Sep 13 00:01:30.907102 kubelet[2760]: I0913 00:01:30.907083 2760 factory.go:221] Registration of the systemd container factory successfully Sep 13 00:01:30.907278 kubelet[2760]: I0913 00:01:30.907261 2760 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 00:01:30.907559 kubelet[2760]: E0913 00:01:30.906472 2760 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.14:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.14:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.5-n-a13ccab244.1864ae90a79038f4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.5-n-a13ccab244,UID:ci-4081.3.5-n-a13ccab244,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.5-n-a13ccab244,},FirstTimestamp:2025-09-13 00:01:30.891106548 +0000 UTC m=+1.113203470,LastTimestamp:2025-09-13 00:01:30.891106548 +0000 UTC m=+1.113203470,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.5-n-a13ccab244,}" Sep 13 00:01:30.907980 kubelet[2760]: I0913 00:01:30.907947 2760 reconciler.go:26] "Reconciler: start to sync state" Sep 13 00:01:30.908817 kubelet[2760]: E0913 00:01:30.908767 2760 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 00:01:30.910095 kubelet[2760]: I0913 00:01:30.910068 2760 factory.go:221] Registration of the containerd container factory successfully Sep 13 00:01:30.936150 kubelet[2760]: I0913 00:01:30.936093 2760 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 13 00:01:30.936150 kubelet[2760]: I0913 00:01:30.936109 2760 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 13 00:01:30.936150 kubelet[2760]: I0913 00:01:30.936127 2760 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:01:30.943837 kubelet[2760]: I0913 00:01:30.943728 2760 policy_none.go:49] "None policy: Start" Sep 13 00:01:30.944371 kubelet[2760]: I0913 00:01:30.944302 2760 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 13 00:01:30.944371 kubelet[2760]: I0913 00:01:30.944329 2760 state_mem.go:35] "Initializing new in-memory state store" Sep 13 00:01:30.954328 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 13 00:01:30.968694 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 13 00:01:30.972224 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Sep 13 00:01:30.980653 kubelet[2760]: I0913 00:01:30.980633 2760 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 13 00:01:30.981299 kubelet[2760]: I0913 00:01:30.981284 2760 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 00:01:30.981411 kubelet[2760]: I0913 00:01:30.981371 2760 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:01:30.981859 kubelet[2760]: I0913 00:01:30.981846 2760 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:01:30.983407 kubelet[2760]: E0913 00:01:30.983385 2760 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.5-n-a13ccab244\" not found" Sep 13 00:01:30.990173 kubelet[2760]: I0913 00:01:30.990130 2760 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 13 00:01:30.991316 kubelet[2760]: I0913 00:01:30.991178 2760 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 13 00:01:30.991316 kubelet[2760]: I0913 00:01:30.991204 2760 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 13 00:01:30.991316 kubelet[2760]: I0913 00:01:30.991222 2760 kubelet.go:2321] "Starting kubelet main sync loop" Sep 13 00:01:30.991316 kubelet[2760]: E0913 00:01:30.991262 2760 kubelet.go:2345] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Sep 13 00:01:30.993579 kubelet[2760]: W0913 00:01:30.993439 2760 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.14:6443: connect: connection refused Sep 13 00:01:30.993579 kubelet[2760]: E0913 00:01:30.993494 2760 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.14:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:01:31.083050 kubelet[2760]: I0913 00:01:31.082987 2760 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.5-n-a13ccab244" Sep 13 00:01:31.083418 kubelet[2760]: E0913 00:01:31.083391 2760 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.14:6443/api/v1/nodes\": dial tcp 10.200.20.14:6443: connect: connection refused" node="ci-4081.3.5-n-a13ccab244" Sep 13 00:01:31.102258 systemd[1]: Created slice kubepods-burstable-podd6801487393aeb0193e0ed47d5b3d7a0.slice - libcontainer container kubepods-burstable-podd6801487393aeb0193e0ed47d5b3d7a0.slice. 
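
[Editor's note] The HardEvictionThresholds embedded in the container_manager_linux nodeConfig dump a few entries above define when the eviction manager started here will act. A sketch that renders those thresholds readably; the JSON is copied from the log, trimmed to the two fields used:

    import json

    # Copied from the nodeConfig dump above, trimmed to Signal and Value.
    thresholds = json.loads("""
    [{"Signal":"nodefs.inodesFree","Value":{"Quantity":null,"Percentage":0.05}},
     {"Signal":"imagefs.available","Value":{"Quantity":null,"Percentage":0.15}},
     {"Signal":"imagefs.inodesFree","Value":{"Quantity":null,"Percentage":0.05}},
     {"Signal":"memory.available","Value":{"Quantity":"100Mi","Percentage":0}},
     {"Signal":"nodefs.available","Value":{"Quantity":null,"Percentage":0.1}}]
    """)
    for t in thresholds:
        v = t["Value"]
        limit = v["Quantity"] or f"{v['Percentage']:.0%}"
        print(f"evict pods when {t['Signal']} < {limit}")
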
Sep 13 00:01:31.106920 kubelet[2760]: E0913 00:01:31.106889 2760 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-n-a13ccab244?timeout=10s\": dial tcp 10.200.20.14:6443: connect: connection refused" interval="400ms" Sep 13 00:01:31.109160 kubelet[2760]: I0913 00:01:31.108924 2760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3e6ca955e58918c35bb9deaecc7c1601-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.5-n-a13ccab244\" (UID: \"3e6ca955e58918c35bb9deaecc7c1601\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-a13ccab244" Sep 13 00:01:31.109160 kubelet[2760]: I0913 00:01:31.108962 2760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8d862ea35e3cdb569d5faf53f4ee0692-kubeconfig\") pod \"kube-scheduler-ci-4081.3.5-n-a13ccab244\" (UID: \"8d862ea35e3cdb569d5faf53f4ee0692\") " pod="kube-system/kube-scheduler-ci-4081.3.5-n-a13ccab244" Sep 13 00:01:31.109160 kubelet[2760]: I0913 00:01:31.108978 2760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d6801487393aeb0193e0ed47d5b3d7a0-ca-certs\") pod \"kube-apiserver-ci-4081.3.5-n-a13ccab244\" (UID: \"d6801487393aeb0193e0ed47d5b3d7a0\") " pod="kube-system/kube-apiserver-ci-4081.3.5-n-a13ccab244" Sep 13 00:01:31.109160 kubelet[2760]: I0913 00:01:31.108994 2760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d6801487393aeb0193e0ed47d5b3d7a0-k8s-certs\") pod \"kube-apiserver-ci-4081.3.5-n-a13ccab244\" (UID: \"d6801487393aeb0193e0ed47d5b3d7a0\") " pod="kube-system/kube-apiserver-ci-4081.3.5-n-a13ccab244" Sep 13 00:01:31.109160 kubelet[2760]: I0913 00:01:31.109008 2760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d6801487393aeb0193e0ed47d5b3d7a0-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.5-n-a13ccab244\" (UID: \"d6801487393aeb0193e0ed47d5b3d7a0\") " pod="kube-system/kube-apiserver-ci-4081.3.5-n-a13ccab244" Sep 13 00:01:31.109310 kubelet[2760]: I0913 00:01:31.109035 2760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3e6ca955e58918c35bb9deaecc7c1601-ca-certs\") pod \"kube-controller-manager-ci-4081.3.5-n-a13ccab244\" (UID: \"3e6ca955e58918c35bb9deaecc7c1601\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-a13ccab244" Sep 13 00:01:31.109310 kubelet[2760]: I0913 00:01:31.109054 2760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3e6ca955e58918c35bb9deaecc7c1601-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.5-n-a13ccab244\" (UID: \"3e6ca955e58918c35bb9deaecc7c1601\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-a13ccab244" Sep 13 00:01:31.109310 kubelet[2760]: I0913 00:01:31.109069 2760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/3e6ca955e58918c35bb9deaecc7c1601-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.5-n-a13ccab244\" (UID: \"3e6ca955e58918c35bb9deaecc7c1601\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-a13ccab244" Sep 13 00:01:31.109310 kubelet[2760]: I0913 00:01:31.109084 2760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3e6ca955e58918c35bb9deaecc7c1601-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.5-n-a13ccab244\" (UID: \"3e6ca955e58918c35bb9deaecc7c1601\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-a13ccab244" Sep 13 00:01:31.114807 systemd[1]: Created slice kubepods-burstable-pod8d862ea35e3cdb569d5faf53f4ee0692.slice - libcontainer container kubepods-burstable-pod8d862ea35e3cdb569d5faf53f4ee0692.slice. Sep 13 00:01:31.122960 systemd[1]: Created slice kubepods-burstable-pod3e6ca955e58918c35bb9deaecc7c1601.slice - libcontainer container kubepods-burstable-pod3e6ca955e58918c35bb9deaecc7c1601.slice. Sep 13 00:01:31.285371 kubelet[2760]: I0913 00:01:31.285342 2760 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.5-n-a13ccab244" Sep 13 00:01:31.285670 kubelet[2760]: E0913 00:01:31.285644 2760 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.14:6443/api/v1/nodes\": dial tcp 10.200.20.14:6443: connect: connection refused" node="ci-4081.3.5-n-a13ccab244" Sep 13 00:01:31.412925 containerd[1714]: time="2025-09-13T00:01:31.412593333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.5-n-a13ccab244,Uid:d6801487393aeb0193e0ed47d5b3d7a0,Namespace:kube-system,Attempt:0,}" Sep 13 00:01:31.416997 containerd[1714]: time="2025-09-13T00:01:31.416960804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.5-n-a13ccab244,Uid:8d862ea35e3cdb569d5faf53f4ee0692,Namespace:kube-system,Attempt:0,}" Sep 13 00:01:31.425128 containerd[1714]: time="2025-09-13T00:01:31.425096668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.5-n-a13ccab244,Uid:3e6ca955e58918c35bb9deaecc7c1601,Namespace:kube-system,Attempt:0,}" Sep 13 00:01:31.508151 kubelet[2760]: E0913 00:01:31.508108 2760 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-n-a13ccab244?timeout=10s\": dial tcp 10.200.20.14:6443: connect: connection refused" interval="800ms" Sep 13 00:01:31.687238 kubelet[2760]: I0913 00:01:31.687206 2760 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.5-n-a13ccab244" Sep 13 00:01:31.687560 kubelet[2760]: E0913 00:01:31.687530 2760 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.14:6443/api/v1/nodes\": dial tcp 10.200.20.14:6443: connect: connection refused" node="ci-4081.3.5-n-a13ccab244" Sep 13 00:01:31.785594 kubelet[2760]: W0913 00:01:31.785535 2760 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.5-n-a13ccab244&limit=500&resourceVersion=0": dial tcp 10.200.20.14:6443: connect: connection refused Sep 13 00:01:31.785717 kubelet[2760]: E0913 00:01:31.785608 2760 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list 
*v1.Node: Get \"https://10.200.20.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.5-n-a13ccab244&limit=500&resourceVersion=0\": dial tcp 10.200.20.14:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:01:31.806399 kubelet[2760]: W0913 00:01:31.806348 2760 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.14:6443: connect: connection refused Sep 13 00:01:31.806481 kubelet[2760]: E0913 00:01:31.806411 2760 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.14:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:01:32.108452 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount123818433.mount: Deactivated successfully. Sep 13 00:01:32.139395 containerd[1714]: time="2025-09-13T00:01:32.139340983Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:01:32.143354 containerd[1714]: time="2025-09-13T00:01:32.143316055Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Sep 13 00:01:32.148442 containerd[1714]: time="2025-09-13T00:01:32.148403405Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:01:32.153052 containerd[1714]: time="2025-09-13T00:01:32.152577876Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:01:32.157260 containerd[1714]: time="2025-09-13T00:01:32.157223347Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 13 00:01:32.161160 containerd[1714]: time="2025-09-13T00:01:32.161128619Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:01:32.164911 containerd[1714]: time="2025-09-13T00:01:32.164853892Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 13 00:01:32.169828 containerd[1714]: time="2025-09-13T00:01:32.169776922Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:01:32.170974 containerd[1714]: time="2025-09-13T00:01:32.170559400Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 745.395172ms" Sep 13 00:01:32.172283 containerd[1714]: time="2025-09-13T00:01:32.172251717Z" 
level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 759.580544ms" Sep 13 00:01:32.176340 containerd[1714]: time="2025-09-13T00:01:32.176302788Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 759.250384ms" Sep 13 00:01:32.184246 kubelet[2760]: W0913 00:01:32.184209 2760 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.14:6443: connect: connection refused Sep 13 00:01:32.184636 kubelet[2760]: E0913 00:01:32.184586 2760 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.14:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:01:32.308911 kubelet[2760]: E0913 00:01:32.308867 2760 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-n-a13ccab244?timeout=10s\": dial tcp 10.200.20.14:6443: connect: connection refused" interval="1.6s" Sep 13 00:01:32.480781 kubelet[2760]: W0913 00:01:32.480730 2760 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.14:6443: connect: connection refused Sep 13 00:01:32.480926 kubelet[2760]: E0913 00:01:32.480788 2760 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.14:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:01:32.490275 kubelet[2760]: I0913 00:01:32.490238 2760 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.5-n-a13ccab244" Sep 13 00:01:32.490632 kubelet[2760]: E0913 00:01:32.490563 2760 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.14:6443/api/v1/nodes\": dial tcp 10.200.20.14:6443: connect: connection refused" node="ci-4081.3.5-n-a13ccab244" Sep 13 00:01:32.984463 kubelet[2760]: E0913 00:01:32.984419 2760 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.14:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:01:33.681402 containerd[1714]: time="2025-09-13T00:01:33.681276553Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:01:33.681719 containerd[1714]: time="2025-09-13T00:01:33.681602993Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:01:33.681719 containerd[1714]: time="2025-09-13T00:01:33.681656633Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:01:33.682678 containerd[1714]: time="2025-09-13T00:01:33.682354792Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:01:33.688536 containerd[1714]: time="2025-09-13T00:01:33.687778147Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:01:33.690647 containerd[1714]: time="2025-09-13T00:01:33.687831867Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:01:33.690647 containerd[1714]: time="2025-09-13T00:01:33.689452065Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:01:33.691411 containerd[1714]: time="2025-09-13T00:01:33.691349824Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:01:33.698714 containerd[1714]: time="2025-09-13T00:01:33.698622857Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:01:33.700039 containerd[1714]: time="2025-09-13T00:01:33.698679137Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:01:33.700139 containerd[1714]: time="2025-09-13T00:01:33.700100175Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:01:33.700306 containerd[1714]: time="2025-09-13T00:01:33.700265095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:01:33.749768 systemd[1]: Started cri-containerd-44b2d70502b268fff13fd4ce8258765c460038ba70e9bf50b80f93242be33439.scope - libcontainer container 44b2d70502b268fff13fd4ce8258765c460038ba70e9bf50b80f93242be33439. Sep 13 00:01:33.751915 systemd[1]: Started cri-containerd-7c19f185731500cb203c0bd37a9b5b391d09bece897ea2064d85fc362e7d7ef7.scope - libcontainer container 7c19f185731500cb203c0bd37a9b5b391d09bece897ea2064d85fc362e7d7ef7. Sep 13 00:01:33.753659 systemd[1]: Started cri-containerd-d4a0093118cf2c6b045165186e60cb5e93d7abf24dbbe24e581ef90c525151e5.scope - libcontainer container d4a0093118cf2c6b045165186e60cb5e93d7abf24dbbe24e581ef90c525151e5. 
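
[Editor's note] The three groups of "loading plugin" messages above are three runc shims starting, apparently one per static-pod sandbox, and each lands in a transient systemd scope named cri-containerd-<sandbox-id>.scope, so the 64-hex CRI sandbox id can be read off the unit name. A small sketch, with the scope name copied from the log (the 12-character truncation is a conventional short form, not from the log):

    import re

    scope = ("cri-containerd-44b2d70502b268fff13fd4ce8258765c"
             "460038ba70e9bf50b80f93242be33439.scope")
    m = re.fullmatch(r"cri-containerd-([0-9a-f]{64})\.scope", scope)
    print(m.group(1)[:12])  # 44b2d70502b2
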
Sep 13 00:01:33.802689 containerd[1714]: time="2025-09-13T00:01:33.802652759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.5-n-a13ccab244,Uid:3e6ca955e58918c35bb9deaecc7c1601,Namespace:kube-system,Attempt:0,} returns sandbox id \"7c19f185731500cb203c0bd37a9b5b391d09bece897ea2064d85fc362e7d7ef7\"" Sep 13 00:01:33.810110 containerd[1714]: time="2025-09-13T00:01:33.810058392Z" level=info msg="CreateContainer within sandbox \"7c19f185731500cb203c0bd37a9b5b391d09bece897ea2064d85fc362e7d7ef7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 13 00:01:33.814522 containerd[1714]: time="2025-09-13T00:01:33.814484788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.5-n-a13ccab244,Uid:d6801487393aeb0193e0ed47d5b3d7a0,Namespace:kube-system,Attempt:0,} returns sandbox id \"44b2d70502b268fff13fd4ce8258765c460038ba70e9bf50b80f93242be33439\"" Sep 13 00:01:33.817915 containerd[1714]: time="2025-09-13T00:01:33.817880985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.5-n-a13ccab244,Uid:8d862ea35e3cdb569d5faf53f4ee0692,Namespace:kube-system,Attempt:0,} returns sandbox id \"d4a0093118cf2c6b045165186e60cb5e93d7abf24dbbe24e581ef90c525151e5\"" Sep 13 00:01:33.818749 containerd[1714]: time="2025-09-13T00:01:33.818578664Z" level=info msg="CreateContainer within sandbox \"44b2d70502b268fff13fd4ce8258765c460038ba70e9bf50b80f93242be33439\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 13 00:01:33.820542 containerd[1714]: time="2025-09-13T00:01:33.820459742Z" level=info msg="CreateContainer within sandbox \"d4a0093118cf2c6b045165186e60cb5e93d7abf24dbbe24e581ef90c525151e5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 13 00:01:33.909749 kubelet[2760]: E0913 00:01:33.909704 2760 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-n-a13ccab244?timeout=10s\": dial tcp 10.200.20.14:6443: connect: connection refused" interval="3.2s" Sep 13 00:01:33.939253 kubelet[2760]: W0913 00:01:33.937737 2760 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.14:6443: connect: connection refused Sep 13 00:01:33.939253 kubelet[2760]: E0913 00:01:33.937805 2760 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.14:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:01:33.964764 containerd[1714]: time="2025-09-13T00:01:33.964713487Z" level=info msg="CreateContainer within sandbox \"7c19f185731500cb203c0bd37a9b5b391d09bece897ea2064d85fc362e7d7ef7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f423c120ad82461e45b23929558fb47e7c2fc428f8723ad23a4bc2c92a2d848b\"" Sep 13 00:01:33.965422 containerd[1714]: time="2025-09-13T00:01:33.965383166Z" level=info msg="StartContainer for \"f423c120ad82461e45b23929558fb47e7c2fc428f8723ad23a4bc2c92a2d848b\"" Sep 13 00:01:33.982272 containerd[1714]: time="2025-09-13T00:01:33.982166430Z" level=info msg="CreateContainer within sandbox 
\"d4a0093118cf2c6b045165186e60cb5e93d7abf24dbbe24e581ef90c525151e5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"57d545721bd7f9b2d3a34332f3a73e3700fb3871f393465b58e5cfc81fd5ccb2\"" Sep 13 00:01:33.983716 containerd[1714]: time="2025-09-13T00:01:33.982721510Z" level=info msg="StartContainer for \"57d545721bd7f9b2d3a34332f3a73e3700fb3871f393465b58e5cfc81fd5ccb2\"" Sep 13 00:01:33.988708 containerd[1714]: time="2025-09-13T00:01:33.988672064Z" level=info msg="CreateContainer within sandbox \"44b2d70502b268fff13fd4ce8258765c460038ba70e9bf50b80f93242be33439\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9eefb15c66860f17beede173f546486ed3574478d8f8961dfe4373d3993e24e4\"" Sep 13 00:01:33.989168 containerd[1714]: time="2025-09-13T00:01:33.989138744Z" level=info msg="StartContainer for \"9eefb15c66860f17beede173f546486ed3574478d8f8961dfe4373d3993e24e4\"" Sep 13 00:01:33.990182 systemd[1]: Started cri-containerd-f423c120ad82461e45b23929558fb47e7c2fc428f8723ad23a4bc2c92a2d848b.scope - libcontainer container f423c120ad82461e45b23929558fb47e7c2fc428f8723ad23a4bc2c92a2d848b. Sep 13 00:01:34.015228 systemd[1]: Started cri-containerd-57d545721bd7f9b2d3a34332f3a73e3700fb3871f393465b58e5cfc81fd5ccb2.scope - libcontainer container 57d545721bd7f9b2d3a34332f3a73e3700fb3871f393465b58e5cfc81fd5ccb2. Sep 13 00:01:34.039173 systemd[1]: Started cri-containerd-9eefb15c66860f17beede173f546486ed3574478d8f8961dfe4373d3993e24e4.scope - libcontainer container 9eefb15c66860f17beede173f546486ed3574478d8f8961dfe4373d3993e24e4. Sep 13 00:01:34.061030 containerd[1714]: time="2025-09-13T00:01:34.059577637Z" level=info msg="StartContainer for \"f423c120ad82461e45b23929558fb47e7c2fc428f8723ad23a4bc2c92a2d848b\" returns successfully" Sep 13 00:01:34.093505 kubelet[2760]: I0913 00:01:34.093471 2760 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.5-n-a13ccab244" Sep 13 00:01:34.093811 kubelet[2760]: E0913 00:01:34.093772 2760 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.14:6443/api/v1/nodes\": dial tcp 10.200.20.14:6443: connect: connection refused" node="ci-4081.3.5-n-a13ccab244" Sep 13 00:01:34.097729 containerd[1714]: time="2025-09-13T00:01:34.097096762Z" level=info msg="StartContainer for \"57d545721bd7f9b2d3a34332f3a73e3700fb3871f393465b58e5cfc81fd5ccb2\" returns successfully" Sep 13 00:01:34.098002 containerd[1714]: time="2025-09-13T00:01:34.097178722Z" level=info msg="StartContainer for \"9eefb15c66860f17beede173f546486ed3574478d8f8961dfe4373d3993e24e4\" returns successfully" Sep 13 00:01:36.759069 kubelet[2760]: E0913 00:01:36.759012 2760 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4081.3.5-n-a13ccab244" not found Sep 13 00:01:36.898861 kubelet[2760]: I0913 00:01:36.898821 2760 apiserver.go:52] "Watching apiserver" Sep 13 00:01:36.999750 kubelet[2760]: I0913 00:01:36.999700 2760 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 13 00:01:37.114632 kubelet[2760]: E0913 00:01:37.114488 2760 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.5-n-a13ccab244\" not found" node="ci-4081.3.5-n-a13ccab244" Sep 13 00:01:37.121515 kubelet[2760]: E0913 00:01:37.121474 2760 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; 
caused by: nodes "ci-4081.3.5-n-a13ccab244" not found Sep 13 00:01:37.295467 kubelet[2760]: I0913 00:01:37.295437 2760 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.5-n-a13ccab244" Sep 13 00:01:37.305312 kubelet[2760]: I0913 00:01:37.305265 2760 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.3.5-n-a13ccab244" Sep 13 00:01:37.305577 kubelet[2760]: E0913 00:01:37.305450 2760 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4081.3.5-n-a13ccab244\": node \"ci-4081.3.5-n-a13ccab244\" not found" Sep 13 00:01:38.716535 systemd[1]: Reloading requested from client PID 3035 ('systemctl') (unit session-9.scope)... Sep 13 00:01:38.716550 systemd[1]: Reloading... Sep 13 00:01:38.813053 zram_generator::config[3071]: No configuration found. Sep 13 00:01:38.923108 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:01:39.016178 systemd[1]: Reloading finished in 299 ms. Sep 13 00:01:39.048796 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:01:39.049112 kubelet[2760]: I0913 00:01:39.048785 2760 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:01:39.064154 systemd[1]: kubelet.service: Deactivated successfully. Sep 13 00:01:39.064378 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:01:39.064435 systemd[1]: kubelet.service: Consumed 1.446s CPU time, 127.5M memory peak, 0B memory swap peak. Sep 13 00:01:39.069245 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:01:39.175558 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:01:39.191334 (kubelet)[3139]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 13 00:01:39.222253 kubelet[3139]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:01:39.222253 kubelet[3139]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 13 00:01:39.222253 kubelet[3139]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 13 00:01:39.222574 kubelet[3139]: I0913 00:01:39.222318 3139 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 00:01:39.230477 kubelet[3139]: I0913 00:01:39.230439 3139 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 13 00:01:39.230477 kubelet[3139]: I0913 00:01:39.230471 3139 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 00:01:39.230716 kubelet[3139]: I0913 00:01:39.230697 3139 server.go:934] "Client rotation is on, will bootstrap in background" Sep 13 00:01:39.232888 kubelet[3139]: I0913 00:01:39.232842 3139 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 13 00:01:39.236041 kubelet[3139]: I0913 00:01:39.235924 3139 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:01:39.249293 kubelet[3139]: E0913 00:01:39.248113 3139 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 00:01:39.249293 kubelet[3139]: I0913 00:01:39.248149 3139 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 13 00:01:39.252853 kubelet[3139]: I0913 00:01:39.252812 3139 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 13 00:01:39.252958 kubelet[3139]: I0913 00:01:39.252951 3139 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 13 00:01:39.253147 kubelet[3139]: I0913 00:01:39.253121 3139 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 00:01:39.253399 kubelet[3139]: I0913 00:01:39.253148 3139 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-4081.3.5-n-a13ccab244","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 13 00:01:39.253399 kubelet[3139]: I0913 00:01:39.253400 3139 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 00:01:39.253520 kubelet[3139]: I0913 00:01:39.253413 3139 container_manager_linux.go:300] "Creating device plugin manager" Sep 13 00:01:39.253520 kubelet[3139]: I0913 00:01:39.253451 3139 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:01:39.253564 kubelet[3139]: I0913 00:01:39.253545 3139 kubelet.go:408] "Attempting to sync node with API server" Sep 13 00:01:39.253564 kubelet[3139]: I0913 00:01:39.253560 3139 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 00:01:39.253601 kubelet[3139]: I0913 00:01:39.253578 3139 kubelet.go:314] "Adding apiserver pod source" Sep 13 00:01:39.253601 kubelet[3139]: I0913 00:01:39.253592 3139 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 00:01:39.257063 kubelet[3139]: I0913 00:01:39.255523 3139 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 13 00:01:39.260024 kubelet[3139]: I0913 00:01:39.257750 3139 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 13 00:01:39.263962 kubelet[3139]: I0913 00:01:39.262911 3139 server.go:1274] "Started kubelet" Sep 13 00:01:39.271900 kubelet[3139]: I0913 00:01:39.268611 3139 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 00:01:39.280062 kubelet[3139]: I0913 00:01:39.279969 3139 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 00:01:39.290175 kubelet[3139]: I0913 00:01:39.290141 3139 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 00:01:39.292032 kubelet[3139]: I0913 00:01:39.282498 3139 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 00:01:39.292032 kubelet[3139]: I0913 00:01:39.282075 3139 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 00:01:39.296005 kubelet[3139]: I0913 00:01:39.283498 3139 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 13 00:01:39.296495 kubelet[3139]: I0913 00:01:39.283507 3139 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 13 00:01:39.296495 kubelet[3139]: E0913 00:01:39.283612 3139 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.5-n-a13ccab244\" not found" Sep 13 00:01:39.296662 kubelet[3139]: I0913 00:01:39.296636 3139 reconciler.go:26] "Reconciler: start to sync state" Sep 13 00:01:39.307424 kubelet[3139]: I0913 00:01:39.307111 3139 factory.go:221] Registration of the systemd container factory successfully Sep 13 00:01:39.307424 kubelet[3139]: I0913 00:01:39.307351 3139 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 00:01:39.311410 kubelet[3139]: I0913 00:01:39.311381 3139 server.go:449] "Adding debug handlers to kubelet server" Sep 13 00:01:39.315718 kubelet[3139]: I0913 00:01:39.314469 3139 factory.go:221] Registration of the containerd container factory successfully Sep 13 00:01:39.326600 kubelet[3139]: I0913 00:01:39.326427 3139 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 13 00:01:39.327436 kubelet[3139]: I0913 00:01:39.327419 3139 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 13 00:01:39.327504 kubelet[3139]: I0913 00:01:39.327495 3139 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 13 00:01:39.327573 kubelet[3139]: I0913 00:01:39.327565 3139 kubelet.go:2321] "Starting kubelet main sync loop" Sep 13 00:01:39.327670 kubelet[3139]: E0913 00:01:39.327655 3139 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 00:01:39.335853 kubelet[3139]: E0913 00:01:39.335831 3139 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 00:01:39.370928 kubelet[3139]: I0913 00:01:39.370904 3139 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 13 00:01:39.371311 kubelet[3139]: I0913 00:01:39.371084 3139 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 13 00:01:39.371311 kubelet[3139]: I0913 00:01:39.371107 3139 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:01:39.371311 kubelet[3139]: I0913 00:01:39.371242 3139 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 13 00:01:39.371311 kubelet[3139]: I0913 00:01:39.371252 3139 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 13 00:01:39.371311 kubelet[3139]: I0913 00:01:39.371268 3139 policy_none.go:49] "None policy: Start" Sep 13 00:01:39.372176 kubelet[3139]: I0913 00:01:39.372100 3139 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 13 00:01:39.372957 kubelet[3139]: I0913 00:01:39.372263 3139 state_mem.go:35] "Initializing new in-memory state store" Sep 13 00:01:39.372957 kubelet[3139]: I0913 00:01:39.372403 3139 state_mem.go:75] "Updated machine memory state" Sep 13 00:01:39.376177 kubelet[3139]: I0913 00:01:39.376145 3139 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 13 00:01:39.376391 kubelet[3139]: I0913 00:01:39.376378 3139 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 00:01:39.376676 kubelet[3139]: I0913 00:01:39.376643 3139 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:01:39.378842 kubelet[3139]: I0913 00:01:39.378803 3139 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:01:39.438457 kubelet[3139]: W0913 00:01:39.438418 3139 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 13 00:01:39.442081 kubelet[3139]: W0913 00:01:39.441839 3139 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 13 00:01:39.442081 kubelet[3139]: W0913 00:01:39.441894 3139 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 13 00:01:39.484056 kubelet[3139]: I0913 00:01:39.481539 3139 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.5-n-a13ccab244" Sep 13 00:01:39.494165 kubelet[3139]: I0913 00:01:39.494135 3139 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081.3.5-n-a13ccab244" Sep 13 00:01:39.495039 kubelet[3139]: I0913 00:01:39.494835 3139 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.3.5-n-a13ccab244" Sep 13 00:01:39.598133 kubelet[3139]: I0913 00:01:39.597983 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d6801487393aeb0193e0ed47d5b3d7a0-k8s-certs\") pod \"kube-apiserver-ci-4081.3.5-n-a13ccab244\" (UID: \"d6801487393aeb0193e0ed47d5b3d7a0\") " pod="kube-system/kube-apiserver-ci-4081.3.5-n-a13ccab244" Sep 13 00:01:39.598461 kubelet[3139]: I0913 00:01:39.598281 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/d6801487393aeb0193e0ed47d5b3d7a0-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.5-n-a13ccab244\" (UID: \"d6801487393aeb0193e0ed47d5b3d7a0\") " pod="kube-system/kube-apiserver-ci-4081.3.5-n-a13ccab244" Sep 13 00:01:39.598461 kubelet[3139]: I0913 00:01:39.598311 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3e6ca955e58918c35bb9deaecc7c1601-ca-certs\") pod \"kube-controller-manager-ci-4081.3.5-n-a13ccab244\" (UID: \"3e6ca955e58918c35bb9deaecc7c1601\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-a13ccab244" Sep 13 00:01:39.598461 kubelet[3139]: I0913 00:01:39.598328 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3e6ca955e58918c35bb9deaecc7c1601-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.5-n-a13ccab244\" (UID: \"3e6ca955e58918c35bb9deaecc7c1601\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-a13ccab244" Sep 13 00:01:39.598461 kubelet[3139]: I0913 00:01:39.598344 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3e6ca955e58918c35bb9deaecc7c1601-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.5-n-a13ccab244\" (UID: \"3e6ca955e58918c35bb9deaecc7c1601\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-a13ccab244" Sep 13 00:01:39.598461 kubelet[3139]: I0913 00:01:39.598369 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d6801487393aeb0193e0ed47d5b3d7a0-ca-certs\") pod \"kube-apiserver-ci-4081.3.5-n-a13ccab244\" (UID: \"d6801487393aeb0193e0ed47d5b3d7a0\") " pod="kube-system/kube-apiserver-ci-4081.3.5-n-a13ccab244" Sep 13 00:01:39.598598 kubelet[3139]: I0913 00:01:39.598393 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3e6ca955e58918c35bb9deaecc7c1601-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.5-n-a13ccab244\" (UID: \"3e6ca955e58918c35bb9deaecc7c1601\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-a13ccab244" Sep 13 00:01:39.598598 kubelet[3139]: I0913 00:01:39.598409 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3e6ca955e58918c35bb9deaecc7c1601-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.5-n-a13ccab244\" (UID: \"3e6ca955e58918c35bb9deaecc7c1601\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-a13ccab244" Sep 13 00:01:39.598598 kubelet[3139]: I0913 00:01:39.598424 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8d862ea35e3cdb569d5faf53f4ee0692-kubeconfig\") pod \"kube-scheduler-ci-4081.3.5-n-a13ccab244\" (UID: \"8d862ea35e3cdb569d5faf53f4ee0692\") " pod="kube-system/kube-scheduler-ci-4081.3.5-n-a13ccab244" Sep 13 00:01:40.267852 kubelet[3139]: I0913 00:01:40.267770 3139 apiserver.go:52] "Watching apiserver" Sep 13 00:01:40.296752 kubelet[3139]: I0913 00:01:40.296702 3139 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 13 00:01:40.368124 kubelet[3139]: W0913 
00:01:40.367874 3139 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 13 00:01:40.368124 kubelet[3139]: E0913 00:01:40.367935 3139 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.3.5-n-a13ccab244\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.5-n-a13ccab244" Sep 13 00:01:40.377400 kubelet[3139]: I0913 00:01:40.377313 3139 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.5-n-a13ccab244" podStartSLOduration=1.377295815 podStartE2EDuration="1.377295815s" podCreationTimestamp="2025-09-13 00:01:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:01:40.359436903 +0000 UTC m=+1.165118206" watchObservedRunningTime="2025-09-13 00:01:40.377295815 +0000 UTC m=+1.182977078" Sep 13 00:01:40.392551 kubelet[3139]: I0913 00:01:40.392491 3139 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.5-n-a13ccab244" podStartSLOduration=1.3924751739999999 podStartE2EDuration="1.392475174s" podCreationTimestamp="2025-09-13 00:01:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:01:40.377466455 +0000 UTC m=+1.183147758" watchObservedRunningTime="2025-09-13 00:01:40.392475174 +0000 UTC m=+1.198156477" Sep 13 00:01:40.407035 kubelet[3139]: I0913 00:01:40.406864 3139 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.5-n-a13ccab244" podStartSLOduration=1.4068499349999999 podStartE2EDuration="1.406849935s" podCreationTimestamp="2025-09-13 00:01:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:01:40.393560931 +0000 UTC m=+1.199242234" watchObservedRunningTime="2025-09-13 00:01:40.406849935 +0000 UTC m=+1.212531238" Sep 13 00:01:45.164201 kubelet[3139]: I0913 00:01:45.164166 3139 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 13 00:01:45.164940 containerd[1714]: time="2025-09-13T00:01:45.164766250Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 13 00:01:45.165197 kubelet[3139]: I0913 00:01:45.164930 3139 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 13 00:01:45.682259 systemd[1]: Created slice kubepods-besteffort-pod77261037_bbc8_4790_8d03_26f9ee1dd37d.slice - libcontainer container kubepods-besteffort-pod77261037_bbc8_4790_8d03_26f9ee1dd37d.slice. 
Sep 13 00:01:45.731617 kubelet[3139]: I0913 00:01:45.731571 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/77261037-bbc8-4790-8d03-26f9ee1dd37d-kube-proxy\") pod \"kube-proxy-nmbdc\" (UID: \"77261037-bbc8-4790-8d03-26f9ee1dd37d\") " pod="kube-system/kube-proxy-nmbdc" Sep 13 00:01:45.731617 kubelet[3139]: I0913 00:01:45.731614 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/77261037-bbc8-4790-8d03-26f9ee1dd37d-lib-modules\") pod \"kube-proxy-nmbdc\" (UID: \"77261037-bbc8-4790-8d03-26f9ee1dd37d\") " pod="kube-system/kube-proxy-nmbdc" Sep 13 00:01:45.731772 kubelet[3139]: I0913 00:01:45.731634 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rng2n\" (UniqueName: \"kubernetes.io/projected/77261037-bbc8-4790-8d03-26f9ee1dd37d-kube-api-access-rng2n\") pod \"kube-proxy-nmbdc\" (UID: \"77261037-bbc8-4790-8d03-26f9ee1dd37d\") " pod="kube-system/kube-proxy-nmbdc" Sep 13 00:01:45.731772 kubelet[3139]: I0913 00:01:45.731656 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/77261037-bbc8-4790-8d03-26f9ee1dd37d-xtables-lock\") pod \"kube-proxy-nmbdc\" (UID: \"77261037-bbc8-4790-8d03-26f9ee1dd37d\") " pod="kube-system/kube-proxy-nmbdc" Sep 13 00:01:45.840520 kubelet[3139]: E0913 00:01:45.840479 3139 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Sep 13 00:01:45.840520 kubelet[3139]: E0913 00:01:45.840515 3139 projected.go:194] Error preparing data for projected volume kube-api-access-rng2n for pod kube-system/kube-proxy-nmbdc: configmap "kube-root-ca.crt" not found Sep 13 00:01:45.840688 kubelet[3139]: E0913 00:01:45.840574 3139 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/77261037-bbc8-4790-8d03-26f9ee1dd37d-kube-api-access-rng2n podName:77261037-bbc8-4790-8d03-26f9ee1dd37d nodeName:}" failed. No retries permitted until 2025-09-13 00:01:46.340554378 +0000 UTC m=+7.146235681 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-rng2n" (UniqueName: "kubernetes.io/projected/77261037-bbc8-4790-8d03-26f9ee1dd37d-kube-api-access-rng2n") pod "kube-proxy-nmbdc" (UID: "77261037-bbc8-4790-8d03-26f9ee1dd37d") : configmap "kube-root-ca.crt" not found Sep 13 00:01:46.314043 systemd[1]: Created slice kubepods-besteffort-pod8e2836bf_dea1_40a4_a093_4bde2f5fd224.slice - libcontainer container kubepods-besteffort-pod8e2836bf_dea1_40a4_a093_4bde2f5fd224.slice. 
Sep 13 00:01:46.335645 kubelet[3139]: I0913 00:01:46.335608 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9h4nx\" (UniqueName: \"kubernetes.io/projected/8e2836bf-dea1-40a4-a093-4bde2f5fd224-kube-api-access-9h4nx\") pod \"tigera-operator-58fc44c59b-s5d5q\" (UID: \"8e2836bf-dea1-40a4-a093-4bde2f5fd224\") " pod="tigera-operator/tigera-operator-58fc44c59b-s5d5q" Sep 13 00:01:46.336102 kubelet[3139]: I0913 00:01:46.336082 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8e2836bf-dea1-40a4-a093-4bde2f5fd224-var-lib-calico\") pod \"tigera-operator-58fc44c59b-s5d5q\" (UID: \"8e2836bf-dea1-40a4-a093-4bde2f5fd224\") " pod="tigera-operator/tigera-operator-58fc44c59b-s5d5q" Sep 13 00:01:46.590917 containerd[1714]: time="2025-09-13T00:01:46.590819873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nmbdc,Uid:77261037-bbc8-4790-8d03-26f9ee1dd37d,Namespace:kube-system,Attempt:0,}" Sep 13 00:01:46.617714 containerd[1714]: time="2025-09-13T00:01:46.617671192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-s5d5q,Uid:8e2836bf-dea1-40a4-a093-4bde2f5fd224,Namespace:tigera-operator,Attempt:0,}" Sep 13 00:01:46.634531 containerd[1714]: time="2025-09-13T00:01:46.634434367Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:01:46.634531 containerd[1714]: time="2025-09-13T00:01:46.634492567Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:01:46.634531 containerd[1714]: time="2025-09-13T00:01:46.634507807Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:01:46.634930 containerd[1714]: time="2025-09-13T00:01:46.634577927Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:01:46.658197 systemd[1]: Started cri-containerd-31b5970fa016ed0437c163c43f5560d4361fbde7ec018f8c807e2dd38e22df6d.scope - libcontainer container 31b5970fa016ed0437c163c43f5560d4361fbde7ec018f8c807e2dd38e22df6d. Sep 13 00:01:46.672298 containerd[1714]: time="2025-09-13T00:01:46.671779830Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:01:46.677120 containerd[1714]: time="2025-09-13T00:01:46.674755345Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:01:46.677120 containerd[1714]: time="2025-09-13T00:01:46.674778225Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:01:46.677120 containerd[1714]: time="2025-09-13T00:01:46.674871225Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:01:46.682519 containerd[1714]: time="2025-09-13T00:01:46.682448773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nmbdc,Uid:77261037-bbc8-4790-8d03-26f9ee1dd37d,Namespace:kube-system,Attempt:0,} returns sandbox id \"31b5970fa016ed0437c163c43f5560d4361fbde7ec018f8c807e2dd38e22df6d\"" Sep 13 00:01:46.686860 containerd[1714]: time="2025-09-13T00:01:46.686734847Z" level=info msg="CreateContainer within sandbox \"31b5970fa016ed0437c163c43f5560d4361fbde7ec018f8c807e2dd38e22df6d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 13 00:01:46.696247 systemd[1]: Started cri-containerd-015c51b5f072e93d1958c38171652c9e1efcb7fccf4c6332be37e604e44de4ba.scope - libcontainer container 015c51b5f072e93d1958c38171652c9e1efcb7fccf4c6332be37e604e44de4ba. Sep 13 00:01:46.724561 containerd[1714]: time="2025-09-13T00:01:46.724415309Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-s5d5q,Uid:8e2836bf-dea1-40a4-a093-4bde2f5fd224,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"015c51b5f072e93d1958c38171652c9e1efcb7fccf4c6332be37e604e44de4ba\"" Sep 13 00:01:46.725752 containerd[1714]: time="2025-09-13T00:01:46.725730307Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 13 00:01:46.744898 containerd[1714]: time="2025-09-13T00:01:46.744849478Z" level=info msg="CreateContainer within sandbox \"31b5970fa016ed0437c163c43f5560d4361fbde7ec018f8c807e2dd38e22df6d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f4989ed532a832e0efc48b8e0b8fcab721ebee342e6d96c27d7a9a8ce40b4a95\"" Sep 13 00:01:46.745714 containerd[1714]: time="2025-09-13T00:01:46.745607157Z" level=info msg="StartContainer for \"f4989ed532a832e0efc48b8e0b8fcab721ebee342e6d96c27d7a9a8ce40b4a95\"" Sep 13 00:01:46.779191 systemd[1]: Started cri-containerd-f4989ed532a832e0efc48b8e0b8fcab721ebee342e6d96c27d7a9a8ce40b4a95.scope - libcontainer container f4989ed532a832e0efc48b8e0b8fcab721ebee342e6d96c27d7a9a8ce40b4a95. Sep 13 00:01:46.812695 containerd[1714]: time="2025-09-13T00:01:46.812566175Z" level=info msg="StartContainer for \"f4989ed532a832e0efc48b8e0b8fcab721ebee342e6d96c27d7a9a8ce40b4a95\" returns successfully" Sep 13 00:01:47.451448 kubelet[3139]: I0913 00:01:47.451281 3139 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-nmbdc" podStartSLOduration=2.4512625200000002 podStartE2EDuration="2.45126252s" podCreationTimestamp="2025-09-13 00:01:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:01:47.37938991 +0000 UTC m=+8.185071293" watchObservedRunningTime="2025-09-13 00:01:47.45126252 +0000 UTC m=+8.256943823" Sep 13 00:01:48.272554 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1234406074.mount: Deactivated successfully. 
Sep 13 00:01:48.679235 containerd[1714]: time="2025-09-13T00:01:48.679191446Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:01:48.684045 containerd[1714]: time="2025-09-13T00:01:48.683853559Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=22152365" Sep 13 00:01:48.687845 containerd[1714]: time="2025-09-13T00:01:48.687787713Z" level=info msg="ImageCreate event name:\"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:01:48.692960 containerd[1714]: time="2025-09-13T00:01:48.692907265Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:01:48.693958 containerd[1714]: time="2025-09-13T00:01:48.693837664Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"22148360\" in 1.967981797s" Sep 13 00:01:48.693958 containerd[1714]: time="2025-09-13T00:01:48.693870344Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\"" Sep 13 00:01:48.697480 containerd[1714]: time="2025-09-13T00:01:48.697443578Z" level=info msg="CreateContainer within sandbox \"015c51b5f072e93d1958c38171652c9e1efcb7fccf4c6332be37e604e44de4ba\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 13 00:01:48.733266 containerd[1714]: time="2025-09-13T00:01:48.733148244Z" level=info msg="CreateContainer within sandbox \"015c51b5f072e93d1958c38171652c9e1efcb7fccf4c6332be37e604e44de4ba\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"f7d19687258f705ed182ce9d667c67297a173fe965408dc13f1f5aecce541850\"" Sep 13 00:01:48.734544 containerd[1714]: time="2025-09-13T00:01:48.733699363Z" level=info msg="StartContainer for \"f7d19687258f705ed182ce9d667c67297a173fe965408dc13f1f5aecce541850\"" Sep 13 00:01:48.760200 systemd[1]: Started cri-containerd-f7d19687258f705ed182ce9d667c67297a173fe965408dc13f1f5aecce541850.scope - libcontainer container f7d19687258f705ed182ce9d667c67297a173fe965408dc13f1f5aecce541850. 
Sep 13 00:01:48.790156 containerd[1714]: time="2025-09-13T00:01:48.790108877Z" level=info msg="StartContainer for \"f7d19687258f705ed182ce9d667c67297a173fe965408dc13f1f5aecce541850\" returns successfully" Sep 13 00:01:50.403697 kubelet[3139]: I0913 00:01:50.403629 3139 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-s5d5q" podStartSLOduration=2.43391942 podStartE2EDuration="4.403598934s" podCreationTimestamp="2025-09-13 00:01:46 +0000 UTC" firstStartedPulling="2025-09-13 00:01:46.725387188 +0000 UTC m=+7.531068451" lastFinishedPulling="2025-09-13 00:01:48.695066662 +0000 UTC m=+9.500747965" observedRunningTime="2025-09-13 00:01:49.38456245 +0000 UTC m=+10.190243833" watchObservedRunningTime="2025-09-13 00:01:50.403598934 +0000 UTC m=+11.209280237" Sep 13 00:01:54.705849 sudo[2219]: pam_unix(sudo:session): session closed for user root Sep 13 00:01:54.790288 sshd[2216]: pam_unix(sshd:session): session closed for user core Sep 13 00:01:54.793857 systemd[1]: sshd@6-10.200.20.14:22-10.200.16.10:33640.service: Deactivated successfully. Sep 13 00:01:54.796082 systemd[1]: session-9.scope: Deactivated successfully. Sep 13 00:01:54.796347 systemd[1]: session-9.scope: Consumed 5.838s CPU time, 153.4M memory peak, 0B memory swap peak. Sep 13 00:01:54.797470 systemd-logind[1688]: Session 9 logged out. Waiting for processes to exit. Sep 13 00:01:54.798821 systemd-logind[1688]: Removed session 9. Sep 13 00:02:01.182507 systemd[1]: Created slice kubepods-besteffort-podfa521b19_eabc_4261_92f3_a02b1ac9bb26.slice - libcontainer container kubepods-besteffort-podfa521b19_eabc_4261_92f3_a02b1ac9bb26.slice. Sep 13 00:02:01.226035 kubelet[3139]: I0913 00:02:01.225927 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fa521b19-eabc-4261-92f3-a02b1ac9bb26-tigera-ca-bundle\") pod \"calico-typha-84d5fc849c-x9856\" (UID: \"fa521b19-eabc-4261-92f3-a02b1ac9bb26\") " pod="calico-system/calico-typha-84d5fc849c-x9856" Sep 13 00:02:01.226035 kubelet[3139]: I0913 00:02:01.225975 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/fa521b19-eabc-4261-92f3-a02b1ac9bb26-typha-certs\") pod \"calico-typha-84d5fc849c-x9856\" (UID: \"fa521b19-eabc-4261-92f3-a02b1ac9bb26\") " pod="calico-system/calico-typha-84d5fc849c-x9856" Sep 13 00:02:01.226035 kubelet[3139]: I0913 00:02:01.225996 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bspbg\" (UniqueName: \"kubernetes.io/projected/fa521b19-eabc-4261-92f3-a02b1ac9bb26-kube-api-access-bspbg\") pod \"calico-typha-84d5fc849c-x9856\" (UID: \"fa521b19-eabc-4261-92f3-a02b1ac9bb26\") " pod="calico-system/calico-typha-84d5fc849c-x9856" Sep 13 00:02:01.433907 systemd[1]: Created slice kubepods-besteffort-pod3e9a8c16_2d6e_4796_ae57_599490a0b2e3.slice - libcontainer container kubepods-besteffort-pod3e9a8c16_2d6e_4796_ae57_599490a0b2e3.slice. 
Sep 13 00:02:01.486692 containerd[1714]: time="2025-09-13T00:02:01.486633102Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-84d5fc849c-x9856,Uid:fa521b19-eabc-4261-92f3-a02b1ac9bb26,Namespace:calico-system,Attempt:0,}" Sep 13 00:02:01.528271 kubelet[3139]: I0913 00:02:01.527637 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/3e9a8c16-2d6e-4796-ae57-599490a0b2e3-cni-log-dir\") pod \"calico-node-2s4m4\" (UID: \"3e9a8c16-2d6e-4796-ae57-599490a0b2e3\") " pod="calico-system/calico-node-2s4m4" Sep 13 00:02:01.528271 kubelet[3139]: I0913 00:02:01.527697 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/3e9a8c16-2d6e-4796-ae57-599490a0b2e3-node-certs\") pod \"calico-node-2s4m4\" (UID: \"3e9a8c16-2d6e-4796-ae57-599490a0b2e3\") " pod="calico-system/calico-node-2s4m4" Sep 13 00:02:01.528271 kubelet[3139]: I0913 00:02:01.527718 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/3e9a8c16-2d6e-4796-ae57-599490a0b2e3-cni-bin-dir\") pod \"calico-node-2s4m4\" (UID: \"3e9a8c16-2d6e-4796-ae57-599490a0b2e3\") " pod="calico-system/calico-node-2s4m4" Sep 13 00:02:01.528271 kubelet[3139]: I0913 00:02:01.527741 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e9a8c16-2d6e-4796-ae57-599490a0b2e3-tigera-ca-bundle\") pod \"calico-node-2s4m4\" (UID: \"3e9a8c16-2d6e-4796-ae57-599490a0b2e3\") " pod="calico-system/calico-node-2s4m4" Sep 13 00:02:01.528271 kubelet[3139]: I0913 00:02:01.527766 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3e9a8c16-2d6e-4796-ae57-599490a0b2e3-var-lib-calico\") pod \"calico-node-2s4m4\" (UID: \"3e9a8c16-2d6e-4796-ae57-599490a0b2e3\") " pod="calico-system/calico-node-2s4m4" Sep 13 00:02:01.530299 kubelet[3139]: I0913 00:02:01.527782 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/3e9a8c16-2d6e-4796-ae57-599490a0b2e3-cni-net-dir\") pod \"calico-node-2s4m4\" (UID: \"3e9a8c16-2d6e-4796-ae57-599490a0b2e3\") " pod="calico-system/calico-node-2s4m4" Sep 13 00:02:01.530299 kubelet[3139]: I0913 00:02:01.527796 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbj4c\" (UniqueName: \"kubernetes.io/projected/3e9a8c16-2d6e-4796-ae57-599490a0b2e3-kube-api-access-wbj4c\") pod \"calico-node-2s4m4\" (UID: \"3e9a8c16-2d6e-4796-ae57-599490a0b2e3\") " pod="calico-system/calico-node-2s4m4" Sep 13 00:02:01.530299 kubelet[3139]: I0913 00:02:01.527813 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/3e9a8c16-2d6e-4796-ae57-599490a0b2e3-policysync\") pod \"calico-node-2s4m4\" (UID: \"3e9a8c16-2d6e-4796-ae57-599490a0b2e3\") " pod="calico-system/calico-node-2s4m4" Sep 13 00:02:01.530299 kubelet[3139]: I0913 00:02:01.527827 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: 
\"kubernetes.io/host-path/3e9a8c16-2d6e-4796-ae57-599490a0b2e3-var-run-calico\") pod \"calico-node-2s4m4\" (UID: \"3e9a8c16-2d6e-4796-ae57-599490a0b2e3\") " pod="calico-system/calico-node-2s4m4" Sep 13 00:02:01.530299 kubelet[3139]: I0913 00:02:01.527841 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3e9a8c16-2d6e-4796-ae57-599490a0b2e3-lib-modules\") pod \"calico-node-2s4m4\" (UID: \"3e9a8c16-2d6e-4796-ae57-599490a0b2e3\") " pod="calico-system/calico-node-2s4m4" Sep 13 00:02:01.530408 kubelet[3139]: I0913 00:02:01.527858 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/3e9a8c16-2d6e-4796-ae57-599490a0b2e3-flexvol-driver-host\") pod \"calico-node-2s4m4\" (UID: \"3e9a8c16-2d6e-4796-ae57-599490a0b2e3\") " pod="calico-system/calico-node-2s4m4" Sep 13 00:02:01.530408 kubelet[3139]: I0913 00:02:01.527876 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3e9a8c16-2d6e-4796-ae57-599490a0b2e3-xtables-lock\") pod \"calico-node-2s4m4\" (UID: \"3e9a8c16-2d6e-4796-ae57-599490a0b2e3\") " pod="calico-system/calico-node-2s4m4" Sep 13 00:02:01.543648 containerd[1714]: time="2025-09-13T00:02:01.543326304Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:02:01.543648 containerd[1714]: time="2025-09-13T00:02:01.543399424Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:02:01.543648 containerd[1714]: time="2025-09-13T00:02:01.543415984Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:02:01.543648 containerd[1714]: time="2025-09-13T00:02:01.543512263Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:02:01.566997 kubelet[3139]: E0913 00:02:01.566919 3139 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xm8sj" podUID="a5fb5c00-ba33-4510-871e-6572e3bb79c8" Sep 13 00:02:01.573231 systemd[1]: Started cri-containerd-dca3b2e72bfc42d43b92ff631afaa7a40e60ac6b9e402d3a504eee6e13fd2e97.scope - libcontainer container dca3b2e72bfc42d43b92ff631afaa7a40e60ac6b9e402d3a504eee6e13fd2e97. 
Sep 13 00:02:01.628909 kubelet[3139]: I0913 00:02:01.628646 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/a5fb5c00-ba33-4510-871e-6572e3bb79c8-varrun\") pod \"csi-node-driver-xm8sj\" (UID: \"a5fb5c00-ba33-4510-871e-6572e3bb79c8\") " pod="calico-system/csi-node-driver-xm8sj" Sep 13 00:02:01.628909 kubelet[3139]: I0913 00:02:01.628708 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a5fb5c00-ba33-4510-871e-6572e3bb79c8-kubelet-dir\") pod \"csi-node-driver-xm8sj\" (UID: \"a5fb5c00-ba33-4510-871e-6572e3bb79c8\") " pod="calico-system/csi-node-driver-xm8sj" Sep 13 00:02:01.628909 kubelet[3139]: I0913 00:02:01.628727 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rw8zs\" (UniqueName: \"kubernetes.io/projected/a5fb5c00-ba33-4510-871e-6572e3bb79c8-kube-api-access-rw8zs\") pod \"csi-node-driver-xm8sj\" (UID: \"a5fb5c00-ba33-4510-871e-6572e3bb79c8\") " pod="calico-system/csi-node-driver-xm8sj" Sep 13 00:02:01.628909 kubelet[3139]: I0913 00:02:01.628767 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a5fb5c00-ba33-4510-871e-6572e3bb79c8-registration-dir\") pod \"csi-node-driver-xm8sj\" (UID: \"a5fb5c00-ba33-4510-871e-6572e3bb79c8\") " pod="calico-system/csi-node-driver-xm8sj" Sep 13 00:02:01.628909 kubelet[3139]: I0913 00:02:01.628781 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a5fb5c00-ba33-4510-871e-6572e3bb79c8-socket-dir\") pod \"csi-node-driver-xm8sj\" (UID: \"a5fb5c00-ba33-4510-871e-6572e3bb79c8\") " pod="calico-system/csi-node-driver-xm8sj" Sep 13 00:02:01.636198 kubelet[3139]: E0913 00:02:01.635998 3139 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:02:01.636198 kubelet[3139]: W0913 00:02:01.636072 3139 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:02:01.636198 kubelet[3139]: E0913 00:02:01.636103 3139 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:02:01.636877 kubelet[3139]: E0913 00:02:01.636861 3139 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:02:01.637114 kubelet[3139]: W0913 00:02:01.636965 3139 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:02:01.637114 kubelet[3139]: E0913 00:02:01.636994 3139 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:02:01.653260 containerd[1714]: time="2025-09-13T00:02:01.653219034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-84d5fc849c-x9856,Uid:fa521b19-eabc-4261-92f3-a02b1ac9bb26,Namespace:calico-system,Attempt:0,} returns sandbox id \"dca3b2e72bfc42d43b92ff631afaa7a40e60ac6b9e402d3a504eee6e13fd2e97\"" Sep 13 00:02:01.656418 containerd[1714]: time="2025-09-13T00:02:01.656375028Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\""
Error: unexpected end of JSON input" Sep 13 00:02:01.736639 kubelet[3139]: E0913 00:02:01.736480 3139 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:02:01.736639 kubelet[3139]: W0913 00:02:01.736487 3139 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:02:01.736639 kubelet[3139]: E0913 00:02:01.736536 3139 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:02:01.738269 kubelet[3139]: E0913 00:02:01.736691 3139 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:02:01.738269 kubelet[3139]: W0913 00:02:01.736700 3139 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:02:01.738269 kubelet[3139]: E0913 00:02:01.736804 3139 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:02:01.738269 kubelet[3139]: E0913 00:02:01.736908 3139 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:02:01.738269 kubelet[3139]: W0913 00:02:01.736916 3139 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:02:01.738269 kubelet[3139]: E0913 00:02:01.736927 3139 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:02:01.738269 kubelet[3139]: E0913 00:02:01.737336 3139 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:02:01.738269 kubelet[3139]: W0913 00:02:01.737348 3139 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:02:01.738269 kubelet[3139]: E0913 00:02:01.737365 3139 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:02:01.738269 kubelet[3139]: E0913 00:02:01.737559 3139 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:02:01.738477 kubelet[3139]: W0913 00:02:01.737570 3139 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:02:01.738477 kubelet[3139]: E0913 00:02:01.738013 3139 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:02:01.738477 kubelet[3139]: E0913 00:02:01.738437 3139 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:02:01.738477 kubelet[3139]: W0913 00:02:01.738450 3139 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:02:01.738477 kubelet[3139]: E0913 00:02:01.738464 3139 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:02:01.738734 containerd[1714]: time="2025-09-13T00:02:01.738595736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2s4m4,Uid:3e9a8c16-2d6e-4796-ae57-599490a0b2e3,Namespace:calico-system,Attempt:0,}" Sep 13 00:02:01.739244 kubelet[3139]: E0913 00:02:01.739013 3139 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:02:01.739244 kubelet[3139]: W0913 00:02:01.739036 3139 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:02:01.739244 kubelet[3139]: E0913 00:02:01.739048 3139 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:02:01.739795 kubelet[3139]: E0913 00:02:01.739724 3139 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:02:01.739795 kubelet[3139]: W0913 00:02:01.739795 3139 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:02:01.739911 kubelet[3139]: E0913 00:02:01.739810 3139 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:02:01.751544 kubelet[3139]: E0913 00:02:01.751504 3139 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:02:01.751544 kubelet[3139]: W0913 00:02:01.751528 3139 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:02:01.751544 kubelet[3139]: E0913 00:02:01.751555 3139 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:02:01.795927 containerd[1714]: time="2025-09-13T00:02:01.795584577Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:02:01.795927 containerd[1714]: time="2025-09-13T00:02:01.795659937Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:02:01.795927 containerd[1714]: time="2025-09-13T00:02:01.795676017Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:02:01.797211 containerd[1714]: time="2025-09-13T00:02:01.795904056Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:02:01.825430 systemd[1]: Started cri-containerd-f71219d51b303f89aff72a54bb6b397d667eed3fec3f920d12b1e2a24444a3db.scope - libcontainer container f71219d51b303f89aff72a54bb6b397d667eed3fec3f920d12b1e2a24444a3db. Sep 13 00:02:01.860114 containerd[1714]: time="2025-09-13T00:02:01.859843642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2s4m4,Uid:3e9a8c16-2d6e-4796-ae57-599490a0b2e3,Namespace:calico-system,Attempt:0,} returns sandbox id \"f71219d51b303f89aff72a54bb6b397d667eed3fec3f920d12b1e2a24444a3db\"" Sep 13 00:02:02.867508 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2377913401.mount: Deactivated successfully. Sep 13 00:02:03.328609 kubelet[3139]: E0913 00:02:03.328556 3139 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xm8sj" podUID="a5fb5c00-ba33-4510-871e-6572e3bb79c8" Sep 13 00:02:03.405825 containerd[1714]: time="2025-09-13T00:02:03.405756893Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:02:03.409993 containerd[1714]: time="2025-09-13T00:02:03.409953244Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=33105775" Sep 13 00:02:03.414510 containerd[1714]: time="2025-09-13T00:02:03.414445475Z" level=info msg="ImageCreate event name:\"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:02:03.419602 containerd[1714]: time="2025-09-13T00:02:03.419543504Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:02:03.420755 containerd[1714]: time="2025-09-13T00:02:03.420498822Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"33105629\" in 1.763999235s" Sep 13 00:02:03.420755 containerd[1714]: time="2025-09-13T00:02:03.420532782Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\"" Sep 13 00:02:03.422164 containerd[1714]: time="2025-09-13T00:02:03.422098779Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 13 00:02:03.435320 containerd[1714]: time="2025-09-13T00:02:03.435186471Z" level=info msg="CreateContainer within sandbox \"dca3b2e72bfc42d43b92ff631afaa7a40e60ac6b9e402d3a504eee6e13fd2e97\" for container 
&ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 13 00:02:03.476285 containerd[1714]: time="2025-09-13T00:02:03.476178146Z" level=info msg="CreateContainer within sandbox \"dca3b2e72bfc42d43b92ff631afaa7a40e60ac6b9e402d3a504eee6e13fd2e97\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"b96945bd1cf1eb66411ca5b1299a45252f9deae9b461d7bbc90ea3821208a41d\"" Sep 13 00:02:03.476700 containerd[1714]: time="2025-09-13T00:02:03.476673105Z" level=info msg="StartContainer for \"b96945bd1cf1eb66411ca5b1299a45252f9deae9b461d7bbc90ea3821208a41d\"" Sep 13 00:02:03.505183 systemd[1]: Started cri-containerd-b96945bd1cf1eb66411ca5b1299a45252f9deae9b461d7bbc90ea3821208a41d.scope - libcontainer container b96945bd1cf1eb66411ca5b1299a45252f9deae9b461d7bbc90ea3821208a41d. Sep 13 00:02:03.539533 containerd[1714]: time="2025-09-13T00:02:03.539413174Z" level=info msg="StartContainer for \"b96945bd1cf1eb66411ca5b1299a45252f9deae9b461d7bbc90ea3821208a41d\" returns successfully" Sep 13 00:02:04.423287 kubelet[3139]: I0913 00:02:04.422997 3139 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-84d5fc849c-x9856" podStartSLOduration=1.657124457 podStartE2EDuration="3.422980128s" podCreationTimestamp="2025-09-13 00:02:01 +0000 UTC" firstStartedPulling="2025-09-13 00:02:01.655842389 +0000 UTC m=+22.461523692" lastFinishedPulling="2025-09-13 00:02:03.42169806 +0000 UTC m=+24.227379363" observedRunningTime="2025-09-13 00:02:04.42191597 +0000 UTC m=+25.227597273" watchObservedRunningTime="2025-09-13 00:02:04.422980128 +0000 UTC m=+25.228661431" Sep 13 00:02:04.437346 kubelet[3139]: E0913 00:02:04.437224 3139 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:02:04.437346 kubelet[3139]: W0913 00:02:04.437246 3139 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:02:04.437346 kubelet[3139]: E0913 00:02:04.437269 3139 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:02:04.437667 kubelet[3139]: E0913 00:02:04.437428 3139 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:02:04.437667 kubelet[3139]: W0913 00:02:04.437436 3139 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:02:04.437667 kubelet[3139]: E0913 00:02:04.437445 3139 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:02:04.437880 kubelet[3139]: E0913 00:02:04.437781 3139 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:02:04.437880 kubelet[3139]: W0913 00:02:04.437793 3139 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:02:04.437880 kubelet[3139]: E0913 00:02:04.437802 3139 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:02:04.438056 kubelet[3139]: E0913 00:02:04.438042 3139 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:02:04.438215 kubelet[3139]: W0913 00:02:04.438113 3139 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:02:04.438215 kubelet[3139]: E0913 00:02:04.438129 3139 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:02:04.438346 kubelet[3139]: E0913 00:02:04.438336 3139 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:02:04.438402 kubelet[3139]: W0913 00:02:04.438391 3139 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:02:04.438524 kubelet[3139]: E0913 00:02:04.438447 3139 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:02:04.438625 kubelet[3139]: E0913 00:02:04.438615 3139 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:02:04.438772 kubelet[3139]: W0913 00:02:04.438676 3139 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:02:04.438772 kubelet[3139]: E0913 00:02:04.438691 3139 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:02:04.438904 kubelet[3139]: E0913 00:02:04.438893 3139 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:02:04.438954 kubelet[3139]: W0913 00:02:04.438944 3139 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:02:04.439096 kubelet[3139]: E0913 00:02:04.439000 3139 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:02:04.439207 kubelet[3139]: E0913 00:02:04.439196 3139 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:02:04.439274 kubelet[3139]: W0913 00:02:04.439263 3139 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:02:04.439327 kubelet[3139]: E0913 00:02:04.439318 3139 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:02:04.439613 kubelet[3139]: E0913 00:02:04.439529 3139 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:02:04.439613 kubelet[3139]: W0913 00:02:04.439539 3139 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:02:04.439613 kubelet[3139]: E0913 00:02:04.439549 3139 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:02:04.439772 kubelet[3139]: E0913 00:02:04.439761 3139 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:02:04.439832 kubelet[3139]: W0913 00:02:04.439821 3139 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:02:04.439954 kubelet[3139]: E0913 00:02:04.439875 3139 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:02:04.440080 kubelet[3139]: E0913 00:02:04.440070 3139 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:02:04.440144 kubelet[3139]: W0913 00:02:04.440133 3139 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:02:04.440202 kubelet[3139]: E0913 00:02:04.440192 3139 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:02:04.440483 kubelet[3139]: E0913 00:02:04.440389 3139 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:02:04.440483 kubelet[3139]: W0913 00:02:04.440399 3139 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:02:04.440483 kubelet[3139]: E0913 00:02:04.440409 3139 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:02:04.440642 kubelet[3139]: E0913 00:02:04.440632 3139 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:02:04.440693 kubelet[3139]: W0913 00:02:04.440683 3139 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:02:04.440752 kubelet[3139]: E0913 00:02:04.440742 3139 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:02:04.441055 kubelet[3139]: E0913 00:02:04.440951 3139 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:02:04.441055 kubelet[3139]: W0913 00:02:04.440962 3139 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:02:04.441055 kubelet[3139]: E0913 00:02:04.440972 3139 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:02:04.441370 kubelet[3139]: E0913 00:02:04.441294 3139 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:02:04.441370 kubelet[3139]: W0913 00:02:04.441306 3139 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:02:04.441370 kubelet[3139]: E0913 00:02:04.441317 3139 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:02:04.452464 kubelet[3139]: E0913 00:02:04.452440 3139 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:02:04.452464 kubelet[3139]: W0913 00:02:04.452459 3139 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:02:04.452645 kubelet[3139]: E0913 00:02:04.452474 3139 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:02:04.452645 kubelet[3139]: E0913 00:02:04.452638 3139 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:02:04.452707 kubelet[3139]: W0913 00:02:04.452646 3139 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:02:04.452707 kubelet[3139]: E0913 00:02:04.452663 3139 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:02:04.452835 kubelet[3139]: E0913 00:02:04.452822 3139 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:02:04.452835 kubelet[3139]: W0913 00:02:04.452834 3139 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:02:04.452902 kubelet[3139]: E0913 00:02:04.452847 3139 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:02:04.453043 kubelet[3139]: E0913 00:02:04.453029 3139 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:02:04.453043 kubelet[3139]: W0913 00:02:04.453041 3139 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:02:04.453125 kubelet[3139]: E0913 00:02:04.453056 3139 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:02:04.453217 kubelet[3139]: E0913 00:02:04.453203 3139 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:02:04.453217 kubelet[3139]: W0913 00:02:04.453214 3139 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:02:04.453295 kubelet[3139]: E0913 00:02:04.453227 3139 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:02:04.453364 kubelet[3139]: E0913 00:02:04.453352 3139 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:02:04.453364 kubelet[3139]: W0913 00:02:04.453362 3139 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:02:04.453414 kubelet[3139]: E0913 00:02:04.453375 3139 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:02:04.453529 kubelet[3139]: E0913 00:02:04.453518 3139 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:02:04.453529 kubelet[3139]: W0913 00:02:04.453528 3139 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:02:04.453593 kubelet[3139]: E0913 00:02:04.453541 3139 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:02:04.453871 kubelet[3139]: E0913 00:02:04.453791 3139 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:02:04.453871 kubelet[3139]: W0913 00:02:04.453806 3139 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:02:04.453871 kubelet[3139]: E0913 00:02:04.453828 3139 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:02:04.454226 kubelet[3139]: E0913 00:02:04.454129 3139 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:02:04.454226 kubelet[3139]: W0913 00:02:04.454143 3139 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:02:04.454226 kubelet[3139]: E0913 00:02:04.454171 3139 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:02:04.454496 kubelet[3139]: E0913 00:02:04.454409 3139 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:02:04.454496 kubelet[3139]: W0913 00:02:04.454433 3139 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:02:04.454496 kubelet[3139]: E0913 00:02:04.454457 3139 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:02:04.454824 kubelet[3139]: E0913 00:02:04.454743 3139 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:02:04.454824 kubelet[3139]: W0913 00:02:04.454754 3139 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:02:04.454824 kubelet[3139]: E0913 00:02:04.454771 3139 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:02:04.455152 kubelet[3139]: E0913 00:02:04.455140 3139 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:02:04.455288 kubelet[3139]: W0913 00:02:04.455202 3139 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:02:04.455288 kubelet[3139]: E0913 00:02:04.455227 3139 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:02:04.455505 kubelet[3139]: E0913 00:02:04.455493 3139 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:02:04.455763 kubelet[3139]: W0913 00:02:04.455596 3139 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:02:04.455763 kubelet[3139]: E0913 00:02:04.455619 3139 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:02:04.455860 kubelet[3139]: E0913 00:02:04.455837 3139 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:02:04.455860 kubelet[3139]: W0913 00:02:04.455854 3139 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:02:04.455917 kubelet[3139]: E0913 00:02:04.455866 3139 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:02:04.456040 kubelet[3139]: E0913 00:02:04.456001 3139 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:02:04.456040 kubelet[3139]: W0913 00:02:04.456014 3139 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:02:04.456040 kubelet[3139]: E0913 00:02:04.456040 3139 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:02:04.456291 kubelet[3139]: E0913 00:02:04.456195 3139 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:02:04.456291 kubelet[3139]: W0913 00:02:04.456209 3139 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:02:04.456291 kubelet[3139]: E0913 00:02:04.456219 3139 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:02:04.456612 kubelet[3139]: E0913 00:02:04.456491 3139 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:02:04.456612 kubelet[3139]: W0913 00:02:04.456504 3139 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:02:04.456612 kubelet[3139]: E0913 00:02:04.456520 3139 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:02:04.456838 kubelet[3139]: E0913 00:02:04.456793 3139 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:02:04.456838 kubelet[3139]: W0913 00:02:04.456806 3139 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:02:04.456838 kubelet[3139]: E0913 00:02:04.456819 3139 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:02:04.626066 containerd[1714]: time="2025-09-13T00:02:04.626005624Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:02:04.630825 containerd[1714]: time="2025-09-13T00:02:04.630790294Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4266814" Sep 13 00:02:04.635529 containerd[1714]: time="2025-09-13T00:02:04.635491884Z" level=info msg="ImageCreate event name:\"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:02:04.641048 containerd[1714]: time="2025-09-13T00:02:04.640574233Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:02:04.641139 containerd[1714]: time="2025-09-13T00:02:04.641009032Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5636015\" in 1.218881053s" Sep 13 00:02:04.641176 containerd[1714]: time="2025-09-13T00:02:04.641143712Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\"" Sep 13 00:02:04.646178 containerd[1714]: time="2025-09-13T00:02:04.646147141Z" level=info msg="CreateContainer within sandbox \"f71219d51b303f89aff72a54bb6b397d667eed3fec3f920d12b1e2a24444a3db\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 13 00:02:04.699777 containerd[1714]: time="2025-09-13T00:02:04.699730550Z" level=info msg="CreateContainer within sandbox \"f71219d51b303f89aff72a54bb6b397d667eed3fec3f920d12b1e2a24444a3db\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"9b894273da568c38cccbdffa596d2ddcc71097bf9af97a0048c2831314126b01\"" Sep 13 00:02:04.700831 containerd[1714]: time="2025-09-13T00:02:04.700261548Z" level=info msg="StartContainer for \"9b894273da568c38cccbdffa596d2ddcc71097bf9af97a0048c2831314126b01\"" Sep 13 00:02:04.731181 systemd[1]: Started cri-containerd-9b894273da568c38cccbdffa596d2ddcc71097bf9af97a0048c2831314126b01.scope - libcontainer container 9b894273da568c38cccbdffa596d2ddcc71097bf9af97a0048c2831314126b01. 
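The long runs of "Failed to unmarshal output for command: init" above are kubelet's FlexVolume prober at work: it executes /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the single argument init and expects a JSON status object on stdout. Because that executable does not exist yet (Calico's flexvol-driver container is only now being started), the call produces empty output, and decoding "" fails with exactly the "unexpected end of JSON input" seen here. A minimal sketch of a driver that satisfies the init handshake, assuming the standard FlexVolume calling convention; only the install path and binary name come from the log, the rest is illustrative:

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus mirrors the JSON object kubelet expects a FlexVolume
// driver to print on stdout for every call.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) < 2 {
		reply(driverStatus{Status: "Failure", Message: "no command given"})
		os.Exit(1)
	}
	switch os.Args[1] {
	case "init":
		// A well-formed JSON reply to init is exactly what the empty
		// output in the log above fails to provide.
		reply(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
	default:
		// Unimplemented calls are reported as "Not supported", which
		// kubelet treats as a soft failure rather than an error.
		reply(driverStatus{Status: "Not supported", Message: os.Args[1]})
	}
}

func reply(s driverStatus) {
	b, _ := json.Marshal(s)
	fmt.Println(string(b))
}

The pod2daemon-flexvol image pulled above ships Calico's real uds binary into that plugin directory, which is presumably why no further probe failures appear in this excerpt after the flexvol-driver container runs.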
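The pod_startup_latency_tracker entry above for calico-typha-84d5fc849c-x9856 is internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling) from that. A quick check of the arithmetic, with the timestamps copied verbatim from the log:

package main

import (
	"fmt"
	"time"
)

func main() {
	parse := func(s string) time.Time {
		t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2025-09-13 00:02:01 +0000 UTC")           // podCreationTimestamp
	firstPull := parse("2025-09-13 00:02:01.655842389 +0000 UTC") // firstStartedPulling
	lastPull := parse("2025-09-13 00:02:03.42169806 +0000 UTC")   // lastFinishedPulling
	running := parse("2025-09-13 00:02:04.422980128 +0000 UTC")   // watchObservedRunningTime

	e2e := running.Sub(created)          // end-to-end startup time
	slo := e2e - lastPull.Sub(firstPull) // startup time excluding image pulls
	fmt.Println(e2e, slo)
}

Running this prints 3.422980128s and 1.657124457s, matching the logged podStartE2EDuration and podStartSLOduration exactly.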
Sep 13 00:02:04.764033 containerd[1714]: time="2025-09-13T00:02:04.763930735Z" level=info msg="StartContainer for \"9b894273da568c38cccbdffa596d2ddcc71097bf9af97a0048c2831314126b01\" returns successfully" Sep 13 00:02:04.772749 systemd[1]: cri-containerd-9b894273da568c38cccbdffa596d2ddcc71097bf9af97a0048c2831314126b01.scope: Deactivated successfully. Sep 13 00:02:04.794254 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9b894273da568c38cccbdffa596d2ddcc71097bf9af97a0048c2831314126b01-rootfs.mount: Deactivated successfully. Sep 13 00:02:05.330439 kubelet[3139]: E0913 00:02:05.329283 3139 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xm8sj" podUID="a5fb5c00-ba33-4510-871e-6572e3bb79c8" Sep 13 00:02:05.410775 kubelet[3139]: I0913 00:02:05.409313 3139 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:02:05.803895 containerd[1714]: time="2025-09-13T00:02:05.803833563Z" level=info msg="shim disconnected" id=9b894273da568c38cccbdffa596d2ddcc71097bf9af97a0048c2831314126b01 namespace=k8s.io Sep 13 00:02:05.803895 containerd[1714]: time="2025-09-13T00:02:05.803901483Z" level=warning msg="cleaning up after shim disconnected" id=9b894273da568c38cccbdffa596d2ddcc71097bf9af97a0048c2831314126b01 namespace=k8s.io Sep 13 00:02:05.804301 containerd[1714]: time="2025-09-13T00:02:05.803910803Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:02:06.413453 containerd[1714]: time="2025-09-13T00:02:06.413342930Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 13 00:02:07.329626 kubelet[3139]: E0913 00:02:07.329258 3139 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xm8sj" podUID="a5fb5c00-ba33-4510-871e-6572e3bb79c8" Sep 13 00:02:08.752194 containerd[1714]: time="2025-09-13T00:02:08.751406015Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:02:08.754822 containerd[1714]: time="2025-09-13T00:02:08.754791729Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=65913477" Sep 13 00:02:08.760581 containerd[1714]: time="2025-09-13T00:02:08.760544520Z" level=info msg="ImageCreate event name:\"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:02:08.765564 containerd[1714]: time="2025-09-13T00:02:08.765535351Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:02:08.766272 containerd[1714]: time="2025-09-13T00:02:08.766235830Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"67282718\" in 2.352839981s" Sep 13 00:02:08.766272 
containerd[1714]: time="2025-09-13T00:02:08.766269150Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\"" Sep 13 00:02:08.768911 containerd[1714]: time="2025-09-13T00:02:08.768877626Z" level=info msg="CreateContainer within sandbox \"f71219d51b303f89aff72a54bb6b397d667eed3fec3f920d12b1e2a24444a3db\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 13 00:02:08.809992 containerd[1714]: time="2025-09-13T00:02:08.809943998Z" level=info msg="CreateContainer within sandbox \"f71219d51b303f89aff72a54bb6b397d667eed3fec3f920d12b1e2a24444a3db\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c7465fb1dcd0c24c901a65738fb9be6c871fd5cee33357fd7e52ef6a424a1378\"" Sep 13 00:02:08.811277 containerd[1714]: time="2025-09-13T00:02:08.811239275Z" level=info msg="StartContainer for \"c7465fb1dcd0c24c901a65738fb9be6c871fd5cee33357fd7e52ef6a424a1378\"" Sep 13 00:02:08.838083 systemd[1]: run-containerd-runc-k8s.io-c7465fb1dcd0c24c901a65738fb9be6c871fd5cee33357fd7e52ef6a424a1378-runc.iAI7ns.mount: Deactivated successfully. Sep 13 00:02:08.846165 systemd[1]: Started cri-containerd-c7465fb1dcd0c24c901a65738fb9be6c871fd5cee33357fd7e52ef6a424a1378.scope - libcontainer container c7465fb1dcd0c24c901a65738fb9be6c871fd5cee33357fd7e52ef6a424a1378. Sep 13 00:02:08.877709 containerd[1714]: time="2025-09-13T00:02:08.877634925Z" level=info msg="StartContainer for \"c7465fb1dcd0c24c901a65738fb9be6c871fd5cee33357fd7e52ef6a424a1378\" returns successfully" Sep 13 00:02:09.328912 kubelet[3139]: E0913 00:02:09.328570 3139 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xm8sj" podUID="a5fb5c00-ba33-4510-871e-6572e3bb79c8" Sep 13 00:02:10.090268 containerd[1714]: time="2025-09-13T00:02:10.090220588Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 00:02:10.092751 systemd[1]: cri-containerd-c7465fb1dcd0c24c901a65738fb9be6c871fd5cee33357fd7e52ef6a424a1378.scope: Deactivated successfully. Sep 13 00:02:10.112375 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c7465fb1dcd0c24c901a65738fb9be6c871fd5cee33357fd7e52ef6a424a1378-rootfs.mount: Deactivated successfully. 
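The "failed to reload cni configuration ... no network config found in /etc/cni/net.d" error above is containerd reacting to a filesystem change event (here the write of calico-kubeconfig) before Calico's install-cni container has finished writing an actual network config: the runtime watches /etc/cni/net.d and needs at least one parseable network config file before it can report NetworkReady, which is also why the csi-node-driver pod keeps failing with "cni plugin not initialized". A rough sketch of that discovery step, assuming only the directory layout visible in the log; the struct follows the public CNI config-list format, not containerd's internal types:

package main

import (
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// confList models the top-level fields of a CNI .conflist file.
type confList struct {
	CNIVersion string `json:"cniVersion"`
	Name       string `json:"name"`
	Plugins    []struct {
		Type string `json:"type"`
	} `json:"plugins"`
}

func main() {
	dir := "/etc/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("cni config load failed:", err)
		return
	}
	found := false
	for _, e := range entries {
		if !strings.HasSuffix(e.Name(), ".conflist") {
			continue // real loaders also accept .conf/.json; kept minimal here
		}
		raw, err := os.ReadFile(filepath.Join(dir, e.Name()))
		if err != nil {
			continue
		}
		var c confList
		if err := json.Unmarshal(raw, &c); err != nil {
			continue // unparseable or half-written files are skipped
		}
		fmt.Printf("network %q with %d plugin(s) from %s\n", c.Name, len(c.Plugins), e.Name())
		found = true
	}
	if !found {
		// This is the condition behind the error message in the log above.
		fmt.Println("no network config found in", dir, ": cni plugin not initialized")
	}
}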
Sep 13 00:02:10.179779 kubelet[3139]: I0913 00:02:10.179751 3139 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 13 00:02:10.542099 kubelet[3139]: I0913 00:02:10.290089 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0aaf6e3b-2112-4be4-8bd2-d8200dc2d876-config-volume\") pod \"coredns-7c65d6cfc9-wm5tv\" (UID: \"0aaf6e3b-2112-4be4-8bd2-d8200dc2d876\") " pod="kube-system/coredns-7c65d6cfc9-wm5tv" Sep 13 00:02:10.542099 kubelet[3139]: I0913 00:02:10.290126 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43be8aa2-8f03-4619-8b2e-6d113822f84e-config\") pod \"goldmane-7988f88666-9gfvn\" (UID: \"43be8aa2-8f03-4619-8b2e-6d113822f84e\") " pod="calico-system/goldmane-7988f88666-9gfvn" Sep 13 00:02:10.542099 kubelet[3139]: I0913 00:02:10.290145 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/999a9d5f-3120-4ce9-9f3b-6230046e28b9-whisker-ca-bundle\") pod \"whisker-758bd4bf57-4k4lt\" (UID: \"999a9d5f-3120-4ce9-9f3b-6230046e28b9\") " pod="calico-system/whisker-758bd4bf57-4k4lt" Sep 13 00:02:10.542099 kubelet[3139]: I0913 00:02:10.290178 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8qmh\" (UniqueName: \"kubernetes.io/projected/43be8aa2-8f03-4619-8b2e-6d113822f84e-kube-api-access-b8qmh\") pod \"goldmane-7988f88666-9gfvn\" (UID: \"43be8aa2-8f03-4619-8b2e-6d113822f84e\") " pod="calico-system/goldmane-7988f88666-9gfvn" Sep 13 00:02:10.542099 kubelet[3139]: I0913 00:02:10.290194 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2mdl\" (UniqueName: \"kubernetes.io/projected/b2871a94-de4f-43dc-9c64-d0d7c69fe615-kube-api-access-l2mdl\") pod \"coredns-7c65d6cfc9-wfk44\" (UID: \"b2871a94-de4f-43dc-9c64-d0d7c69fe615\") " pod="kube-system/coredns-7c65d6cfc9-wfk44" Sep 13 00:02:10.228383 systemd[1]: Created slice kubepods-burstable-pod0aaf6e3b_2112_4be4_8bd2_d8200dc2d876.slice - libcontainer container kubepods-burstable-pod0aaf6e3b_2112_4be4_8bd2_d8200dc2d876.slice. 
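The systemd "Created slice" entries interleaved here pair one-to-one with the pod UIDs in the kubelet messages: with the systemd cgroup driver, each pod gets a slice named from its QoS class and its UID with dashes replaced by underscores, which is why coredns-7c65d6cfc9-wm5tv (UID 0aaf6e3b-2112-4be4-8bd2-d8200dc2d876) lands in kubepods-burstable-pod0aaf6e3b_2112_4be4_8bd2_d8200dc2d876.slice. A sketch of that mapping as it appears in this journal (illustrative, not kubelet's actual code):

package main

import (
	"fmt"
	"strings"
)

// sliceName reproduces the naming visible in the journal: the pod UID has
// its dashes turned into underscores and is prefixed by the QoS class.
func sliceName(qos, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	// UID of coredns-7c65d6cfc9-wm5tv, a burstable pod, from the entries above.
	fmt.Println(sliceName("burstable", "0aaf6e3b-2112-4be4-8bd2-d8200dc2d876"))
	// UID of calico-kube-controllers-8447f55f8b-9w8q9, a besteffort pod.
	fmt.Println(sliceName("besteffort", "ab136b13-4df9-4d75-8b01-4675fa58dba1"))
}

Both printed names match the slices systemd reports creating in the surrounding entries.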
Sep 13 00:02:10.241840 systemd[1]: Created slice kubepods-besteffort-podab136b13_4df9_4d75_8b01_4675fa58dba1.slice - libcontainer container kubepods-besteffort-podab136b13_4df9_4d75_8b01_4675fa58dba1.slice.
Sep 13 00:02:10.250323 systemd[1]: Created slice kubepods-besteffort-pod7951ccfd_1c72_4867_a215_dc5163f06c2d.slice - libcontainer container kubepods-besteffort-pod7951ccfd_1c72_4867_a215_dc5163f06c2d.slice.
Sep 13 00:02:10.257579 systemd[1]: Created slice kubepods-burstable-podb2871a94_de4f_43dc_9c64_d0d7c69fe615.slice - libcontainer container kubepods-burstable-podb2871a94_de4f_43dc_9c64_d0d7c69fe615.slice.
Sep 13 00:02:10.264687 systemd[1]: Created slice kubepods-besteffort-podc293651a_dfe1_4de8_a4dd_be90531f8a49.slice - libcontainer container kubepods-besteffort-podc293651a_dfe1_4de8_a4dd_be90531f8a49.slice.
Sep 13 00:02:10.273348 systemd[1]: Created slice kubepods-besteffort-pod6bd411cd_c359_4d2d_9237_39cca6617339.slice - libcontainer container kubepods-besteffort-pod6bd411cd_c359_4d2d_9237_39cca6617339.slice.
Sep 13 00:02:10.280919 systemd[1]: Created slice kubepods-besteffort-pod43be8aa2_8f03_4619_8b2e_6d113822f84e.slice - libcontainer container kubepods-besteffort-pod43be8aa2_8f03_4619_8b2e_6d113822f84e.slice.
Sep 13 00:02:10.286197 systemd[1]: Created slice kubepods-besteffort-pod999a9d5f_3120_4ce9_9f3b_6230046e28b9.slice - libcontainer container kubepods-besteffort-pod999a9d5f_3120_4ce9_9f3b_6230046e28b9.slice.
Sep 13 00:02:10.542560 kubelet[3139]: I0913 00:02:10.291091 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjmxm\" (UniqueName: \"kubernetes.io/projected/6bd411cd-c359-4d2d-9237-39cca6617339-kube-api-access-jjmxm\") pod \"calico-apiserver-77b8b896d4-dwdsk\" (UID: \"6bd411cd-c359-4d2d-9237-39cca6617339\") " pod="calico-apiserver/calico-apiserver-77b8b896d4-dwdsk"
Sep 13 00:02:10.542560 kubelet[3139]: I0913 00:02:10.291126 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8rjj\" (UniqueName: \"kubernetes.io/projected/c293651a-dfe1-4de8-a4dd-be90531f8a49-kube-api-access-v8rjj\") pod \"calico-apiserver-5dd67bc454-rxrtp\" (UID: \"c293651a-dfe1-4de8-a4dd-be90531f8a49\") " pod="calico-apiserver/calico-apiserver-5dd67bc454-rxrtp"
Sep 13 00:02:10.542560 kubelet[3139]: I0913 00:02:10.291143 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab136b13-4df9-4d75-8b01-4675fa58dba1-tigera-ca-bundle\") pod \"calico-kube-controllers-8447f55f8b-9w8q9\" (UID: \"ab136b13-4df9-4d75-8b01-4675fa58dba1\") " pod="calico-system/calico-kube-controllers-8447f55f8b-9w8q9"
Sep 13 00:02:10.542560 kubelet[3139]: I0913 00:02:10.291161 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6bww\" (UniqueName: \"kubernetes.io/projected/7951ccfd-1c72-4867-a215-dc5163f06c2d-kube-api-access-k6bww\") pod \"calico-apiserver-5dd67bc454-q6cd7\" (UID: \"7951ccfd-1c72-4867-a215-dc5163f06c2d\") " pod="calico-apiserver/calico-apiserver-5dd67bc454-q6cd7"
Sep 13 00:02:10.542560 kubelet[3139]: I0913 00:02:10.291179 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jj48p\" (UniqueName: \"kubernetes.io/projected/0aaf6e3b-2112-4be4-8bd2-d8200dc2d876-kube-api-access-jj48p\") pod \"coredns-7c65d6cfc9-wm5tv\" (UID: \"0aaf6e3b-2112-4be4-8bd2-d8200dc2d876\") " pod="kube-system/coredns-7c65d6cfc9-wm5tv"
Sep 13 00:02:10.542707 kubelet[3139]: I0913 00:02:10.291197 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43be8aa2-8f03-4619-8b2e-6d113822f84e-goldmane-ca-bundle\") pod \"goldmane-7988f88666-9gfvn\" (UID: \"43be8aa2-8f03-4619-8b2e-6d113822f84e\") " pod="calico-system/goldmane-7988f88666-9gfvn"
Sep 13 00:02:10.542707 kubelet[3139]: I0913 00:02:10.291225 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b2871a94-de4f-43dc-9c64-d0d7c69fe615-config-volume\") pod \"coredns-7c65d6cfc9-wfk44\" (UID: \"b2871a94-de4f-43dc-9c64-d0d7c69fe615\") " pod="kube-system/coredns-7c65d6cfc9-wfk44"
Sep 13 00:02:10.542707 kubelet[3139]: I0913 00:02:10.291268 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c293651a-dfe1-4de8-a4dd-be90531f8a49-calico-apiserver-certs\") pod \"calico-apiserver-5dd67bc454-rxrtp\" (UID: \"c293651a-dfe1-4de8-a4dd-be90531f8a49\") " pod="calico-apiserver/calico-apiserver-5dd67bc454-rxrtp"
Sep 13 00:02:10.542707 kubelet[3139]: I0913 00:02:10.291295 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6bd411cd-c359-4d2d-9237-39cca6617339-calico-apiserver-certs\") pod \"calico-apiserver-77b8b896d4-dwdsk\" (UID: \"6bd411cd-c359-4d2d-9237-39cca6617339\") " pod="calico-apiserver/calico-apiserver-77b8b896d4-dwdsk"
Sep 13 00:02:10.542707 kubelet[3139]: I0913 00:02:10.291320 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/999a9d5f-3120-4ce9-9f3b-6230046e28b9-whisker-backend-key-pair\") pod \"whisker-758bd4bf57-4k4lt\" (UID: \"999a9d5f-3120-4ce9-9f3b-6230046e28b9\") " pod="calico-system/whisker-758bd4bf57-4k4lt"
Sep 13 00:02:10.542861 kubelet[3139]: I0913 00:02:10.291338 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxh4c\" (UniqueName: \"kubernetes.io/projected/999a9d5f-3120-4ce9-9f3b-6230046e28b9-kube-api-access-hxh4c\") pod \"whisker-758bd4bf57-4k4lt\" (UID: \"999a9d5f-3120-4ce9-9f3b-6230046e28b9\") " pod="calico-system/whisker-758bd4bf57-4k4lt"
Sep 13 00:02:10.542861 kubelet[3139]: I0913 00:02:10.291355 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/43be8aa2-8f03-4619-8b2e-6d113822f84e-goldmane-key-pair\") pod \"goldmane-7988f88666-9gfvn\" (UID: \"43be8aa2-8f03-4619-8b2e-6d113822f84e\") " pod="calico-system/goldmane-7988f88666-9gfvn"
Sep 13 00:02:10.542861 kubelet[3139]: I0913 00:02:10.291372 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5llz\" (UniqueName: \"kubernetes.io/projected/ab136b13-4df9-4d75-8b01-4675fa58dba1-kube-api-access-s5llz\") pod \"calico-kube-controllers-8447f55f8b-9w8q9\" (UID: \"ab136b13-4df9-4d75-8b01-4675fa58dba1\") " pod="calico-system/calico-kube-controllers-8447f55f8b-9w8q9"
Sep 13 00:02:10.542861 kubelet[3139]: I0913 00:02:10.291389 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7951ccfd-1c72-4867-a215-dc5163f06c2d-calico-apiserver-certs\") pod \"calico-apiserver-5dd67bc454-q6cd7\" (UID: \"7951ccfd-1c72-4867-a215-dc5163f06c2d\") " pod="calico-apiserver/calico-apiserver-5dd67bc454-q6cd7"
Sep 13 00:02:10.846471 containerd[1714]: time="2025-09-13T00:02:10.846354770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-wm5tv,Uid:0aaf6e3b-2112-4be4-8bd2-d8200dc2d876,Namespace:kube-system,Attempt:0,}" Sep 13 00:02:10.847311 containerd[1714]: time="2025-09-13T00:02:10.846354850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77b8b896d4-dwdsk,Uid:6bd411cd-c359-4d2d-9237-39cca6617339,Namespace:calico-apiserver,Attempt:0,}" Sep 13 00:02:10.847601 containerd[1714]: time="2025-09-13T00:02:10.847543168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-9gfvn,Uid:43be8aa2-8f03-4619-8b2e-6d113822f84e,Namespace:calico-system,Attempt:0,}" Sep 13 00:02:10.847758 containerd[1714]: time="2025-09-13T00:02:10.847541488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8447f55f8b-9w8q9,Uid:ab136b13-4df9-4d75-8b01-4675fa58dba1,Namespace:calico-system,Attempt:0,}" Sep 13 00:02:10.849297 containerd[1714]: time="2025-09-13T00:02:10.849264245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-758bd4bf57-4k4lt,Uid:999a9d5f-3120-4ce9-9f3b-6230046e28b9,Namespace:calico-system,Attempt:0,}" Sep 13 00:02:10.853378 containerd[1714]: time="2025-09-13T00:02:10.853209438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dd67bc454-q6cd7,Uid:7951ccfd-1c72-4867-a215-dc5163f06c2d,Namespace:calico-apiserver,Attempt:0,}" Sep 13 00:02:10.861892 containerd[1714]: time="2025-09-13T00:02:10.861861224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dd67bc454-rxrtp,Uid:c293651a-dfe1-4de8-a4dd-be90531f8a49,Namespace:calico-apiserver,Attempt:0,}" Sep 13 00:02:10.879948 containerd[1714]: time="2025-09-13T00:02:10.879886394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-wfk44,Uid:b2871a94-de4f-43dc-9c64-d0d7c69fe615,Namespace:kube-system,Attempt:0,}" Sep 13 00:02:10.953833 containerd[1714]: time="2025-09-13T00:02:10.953631431Z" level=info msg="shim disconnected" id=c7465fb1dcd0c24c901a65738fb9be6c871fd5cee33357fd7e52ef6a424a1378 namespace=k8s.io Sep 13 00:02:10.953833 containerd[1714]: time="2025-09-13T00:02:10.953688511Z" level=warning msg="cleaning up after shim disconnected" id=c7465fb1dcd0c24c901a65738fb9be6c871fd5cee33357fd7e52ef6a424a1378 namespace=k8s.io Sep 13 00:02:10.953833 containerd[1714]: time="2025-09-13T00:02:10.953697431Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:02:11.288926 containerd[1714]: time="2025-09-13T00:02:11.288863273Z" level=error msg="Failed to destroy network for sandbox \"3375c91854b6104522d2d825899e6319bf11e5cb8d422eccdfa0b733282a1652\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:02:11.289464 containerd[1714]: time="2025-09-13T00:02:11.289279553Z" level=error msg="encountered an error cleaning up failed sandbox \"3375c91854b6104522d2d825899e6319bf11e5cb8d422eccdfa0b733282a1652\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:02:11.289464 containerd[1714]: time="2025-09-13T00:02:11.289333553Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-77b8b896d4-dwdsk,Uid:6bd411cd-c359-4d2d-9237-39cca6617339,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3375c91854b6104522d2d825899e6319bf11e5cb8d422eccdfa0b733282a1652\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:02:11.290856 kubelet[3139]: E0913 00:02:11.289672 3139 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3375c91854b6104522d2d825899e6319bf11e5cb8d422eccdfa0b733282a1652\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:02:11.290856 kubelet[3139]: E0913 00:02:11.289740 3139 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3375c91854b6104522d2d825899e6319bf11e5cb8d422eccdfa0b733282a1652\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77b8b896d4-dwdsk" Sep 13 00:02:11.290856 kubelet[3139]: E0913 00:02:11.289758 3139 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3375c91854b6104522d2d825899e6319bf11e5cb8d422eccdfa0b733282a1652\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77b8b896d4-dwdsk" Sep 13 00:02:11.290986 kubelet[3139]: E0913 00:02:11.289802 3139 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-77b8b896d4-dwdsk_calico-apiserver(6bd411cd-c359-4d2d-9237-39cca6617339)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-77b8b896d4-dwdsk_calico-apiserver(6bd411cd-c359-4d2d-9237-39cca6617339)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3375c91854b6104522d2d825899e6319bf11e5cb8d422eccdfa0b733282a1652\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-77b8b896d4-dwdsk" podUID="6bd411cd-c359-4d2d-9237-39cca6617339" Sep 13 00:02:11.337356 systemd[1]: Created slice kubepods-besteffort-poda5fb5c00_ba33_4510_871e_6572e3bb79c8.slice - libcontainer container kubepods-besteffort-poda5fb5c00_ba33_4510_871e_6572e3bb79c8.slice. 
Sep 13 00:02:11.340609 containerd[1714]: time="2025-09-13T00:02:11.340334708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xm8sj,Uid:a5fb5c00-ba33-4510-871e-6572e3bb79c8,Namespace:calico-system,Attempt:0,}" Sep 13 00:02:11.403218 containerd[1714]: time="2025-09-13T00:02:11.403158923Z" level=error msg="Failed to destroy network for sandbox \"cfce1458f54e5a38a1625c6fdd4be53b662beee0e800a3a81b2a5054d487d809\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:02:11.404277 containerd[1714]: time="2025-09-13T00:02:11.404202481Z" level=error msg="encountered an error cleaning up failed sandbox \"cfce1458f54e5a38a1625c6fdd4be53b662beee0e800a3a81b2a5054d487d809\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:02:11.404366 containerd[1714]: time="2025-09-13T00:02:11.404275521Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-758bd4bf57-4k4lt,Uid:999a9d5f-3120-4ce9-9f3b-6230046e28b9,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cfce1458f54e5a38a1625c6fdd4be53b662beee0e800a3a81b2a5054d487d809\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:02:11.404588 kubelet[3139]: E0913 00:02:11.404461 3139 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cfce1458f54e5a38a1625c6fdd4be53b662beee0e800a3a81b2a5054d487d809\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:02:11.404588 kubelet[3139]: E0913 00:02:11.404526 3139 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cfce1458f54e5a38a1625c6fdd4be53b662beee0e800a3a81b2a5054d487d809\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-758bd4bf57-4k4lt" Sep 13 00:02:11.404588 kubelet[3139]: E0913 00:02:11.404546 3139 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cfce1458f54e5a38a1625c6fdd4be53b662beee0e800a3a81b2a5054d487d809\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-758bd4bf57-4k4lt" Sep 13 00:02:11.404692 kubelet[3139]: E0913 00:02:11.404583 3139 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-758bd4bf57-4k4lt_calico-system(999a9d5f-3120-4ce9-9f3b-6230046e28b9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-758bd4bf57-4k4lt_calico-system(999a9d5f-3120-4ce9-9f3b-6230046e28b9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"cfce1458f54e5a38a1625c6fdd4be53b662beee0e800a3a81b2a5054d487d809\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-758bd4bf57-4k4lt" podUID="999a9d5f-3120-4ce9-9f3b-6230046e28b9" Sep 13 00:02:11.422449 containerd[1714]: time="2025-09-13T00:02:11.422294451Z" level=error msg="Failed to destroy network for sandbox \"8b16589cee9644d11a53ec504a0b949ca17cab71e7560d229885d026373e87bd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:02:11.424411 containerd[1714]: time="2025-09-13T00:02:11.424356768Z" level=error msg="encountered an error cleaning up failed sandbox \"8b16589cee9644d11a53ec504a0b949ca17cab71e7560d229885d026373e87bd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:02:11.424580 containerd[1714]: time="2025-09-13T00:02:11.424557728Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-9gfvn,Uid:43be8aa2-8f03-4619-8b2e-6d113822f84e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8b16589cee9644d11a53ec504a0b949ca17cab71e7560d229885d026373e87bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:02:11.424913 kubelet[3139]: E0913 00:02:11.424875 3139 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b16589cee9644d11a53ec504a0b949ca17cab71e7560d229885d026373e87bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:02:11.425043 kubelet[3139]: E0913 00:02:11.424929 3139 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b16589cee9644d11a53ec504a0b949ca17cab71e7560d229885d026373e87bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-9gfvn" Sep 13 00:02:11.425043 kubelet[3139]: E0913 00:02:11.424948 3139 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b16589cee9644d11a53ec504a0b949ca17cab71e7560d229885d026373e87bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-9gfvn" Sep 13 00:02:11.425043 kubelet[3139]: E0913 00:02:11.424999 3139 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-9gfvn_calico-system(43be8aa2-8f03-4619-8b2e-6d113822f84e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-9gfvn_calico-system(43be8aa2-8f03-4619-8b2e-6d113822f84e)\\\": rpc 
error: code = Unknown desc = failed to setup network for sandbox \\\"8b16589cee9644d11a53ec504a0b949ca17cab71e7560d229885d026373e87bd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-9gfvn" podUID="43be8aa2-8f03-4619-8b2e-6d113822f84e" Sep 13 00:02:11.427871 kubelet[3139]: I0913 00:02:11.427842 3139 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cfce1458f54e5a38a1625c6fdd4be53b662beee0e800a3a81b2a5054d487d809" Sep 13 00:02:11.428510 containerd[1714]: time="2025-09-13T00:02:11.428486321Z" level=info msg="StopPodSandbox for \"cfce1458f54e5a38a1625c6fdd4be53b662beee0e800a3a81b2a5054d487d809\"" Sep 13 00:02:11.428729 containerd[1714]: time="2025-09-13T00:02:11.428710521Z" level=info msg="Ensure that sandbox cfce1458f54e5a38a1625c6fdd4be53b662beee0e800a3a81b2a5054d487d809 in task-service has been cleanup successfully" Sep 13 00:02:11.434446 containerd[1714]: time="2025-09-13T00:02:11.434422391Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 13 00:02:11.440988 kubelet[3139]: I0913 00:02:11.440965 3139 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b16589cee9644d11a53ec504a0b949ca17cab71e7560d229885d026373e87bd" Sep 13 00:02:11.445754 containerd[1714]: time="2025-09-13T00:02:11.445633853Z" level=info msg="StopPodSandbox for \"8b16589cee9644d11a53ec504a0b949ca17cab71e7560d229885d026373e87bd\"" Sep 13 00:02:11.445836 containerd[1714]: time="2025-09-13T00:02:11.445813052Z" level=info msg="Ensure that sandbox 8b16589cee9644d11a53ec504a0b949ca17cab71e7560d229885d026373e87bd in task-service has been cleanup successfully" Sep 13 00:02:11.451795 containerd[1714]: time="2025-09-13T00:02:11.451455003Z" level=error msg="Failed to destroy network for sandbox \"807ca00e6df247995bf8fc66c092154982bf3c3d5167b2143ead362087ddff60\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:02:11.452142 containerd[1714]: time="2025-09-13T00:02:11.452104322Z" level=error msg="encountered an error cleaning up failed sandbox \"807ca00e6df247995bf8fc66c092154982bf3c3d5167b2143ead362087ddff60\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:02:11.452194 containerd[1714]: time="2025-09-13T00:02:11.452163122Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dd67bc454-q6cd7,Uid:7951ccfd-1c72-4867-a215-dc5163f06c2d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"807ca00e6df247995bf8fc66c092154982bf3c3d5167b2143ead362087ddff60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:02:11.457532 kubelet[3139]: E0913 00:02:11.456980 3139 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"807ca00e6df247995bf8fc66c092154982bf3c3d5167b2143ead362087ddff60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:02:11.457532 kubelet[3139]: E0913 00:02:11.457222 3139 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"807ca00e6df247995bf8fc66c092154982bf3c3d5167b2143ead362087ddff60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5dd67bc454-q6cd7" Sep 13 00:02:11.457532 kubelet[3139]: E0913 00:02:11.457265 3139 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"807ca00e6df247995bf8fc66c092154982bf3c3d5167b2143ead362087ddff60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5dd67bc454-q6cd7" Sep 13 00:02:11.457676 kubelet[3139]: E0913 00:02:11.457307 3139 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5dd67bc454-q6cd7_calico-apiserver(7951ccfd-1c72-4867-a215-dc5163f06c2d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5dd67bc454-q6cd7_calico-apiserver(7951ccfd-1c72-4867-a215-dc5163f06c2d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"807ca00e6df247995bf8fc66c092154982bf3c3d5167b2143ead362087ddff60\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5dd67bc454-q6cd7" podUID="7951ccfd-1c72-4867-a215-dc5163f06c2d" Sep 13 00:02:11.465030 kubelet[3139]: I0913 00:02:11.463581 3139 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3375c91854b6104522d2d825899e6319bf11e5cb8d422eccdfa0b733282a1652" Sep 13 00:02:11.466229 containerd[1714]: time="2025-09-13T00:02:11.466167538Z" level=info msg="StopPodSandbox for \"3375c91854b6104522d2d825899e6319bf11e5cb8d422eccdfa0b733282a1652\"" Sep 13 00:02:11.469417 containerd[1714]: time="2025-09-13T00:02:11.468778974Z" level=info msg="Ensure that sandbox 3375c91854b6104522d2d825899e6319bf11e5cb8d422eccdfa0b733282a1652 in task-service has been cleanup successfully" Sep 13 00:02:11.476751 containerd[1714]: time="2025-09-13T00:02:11.476692841Z" level=error msg="Failed to destroy network for sandbox \"1c5c23e66664020f926d562547ab4d70df51570f45e4244c6074804886cf538c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:02:11.477079 containerd[1714]: time="2025-09-13T00:02:11.477047800Z" level=error msg="encountered an error cleaning up failed sandbox \"1c5c23e66664020f926d562547ab4d70df51570f45e4244c6074804886cf538c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:02:11.477134 containerd[1714]: time="2025-09-13T00:02:11.477097520Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7c65d6cfc9-wfk44,Uid:b2871a94-de4f-43dc-9c64-d0d7c69fe615,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1c5c23e66664020f926d562547ab4d70df51570f45e4244c6074804886cf538c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:02:11.477782 kubelet[3139]: E0913 00:02:11.477627 3139 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c5c23e66664020f926d562547ab4d70df51570f45e4244c6074804886cf538c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:02:11.477782 kubelet[3139]: E0913 00:02:11.477683 3139 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c5c23e66664020f926d562547ab4d70df51570f45e4244c6074804886cf538c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-wfk44" Sep 13 00:02:11.477782 kubelet[3139]: E0913 00:02:11.477705 3139 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c5c23e66664020f926d562547ab4d70df51570f45e4244c6074804886cf538c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-wfk44" Sep 13 00:02:11.477903 kubelet[3139]: E0913 00:02:11.477743 3139 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-wfk44_kube-system(b2871a94-de4f-43dc-9c64-d0d7c69fe615)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-wfk44_kube-system(b2871a94-de4f-43dc-9c64-d0d7c69fe615)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1c5c23e66664020f926d562547ab4d70df51570f45e4244c6074804886cf538c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-wfk44" podUID="b2871a94-de4f-43dc-9c64-d0d7c69fe615" Sep 13 00:02:11.480378 containerd[1714]: time="2025-09-13T00:02:11.480216475Z" level=error msg="Failed to destroy network for sandbox \"a40af8bcd87db90cb1dc5121c4bf81444d9f5b8ea9b13fe3dedab7f4a9739efe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:02:11.481460 containerd[1714]: time="2025-09-13T00:02:11.481387273Z" level=error msg="encountered an error cleaning up failed sandbox \"a40af8bcd87db90cb1dc5121c4bf81444d9f5b8ea9b13fe3dedab7f4a9739efe\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:02:11.481610 containerd[1714]: 
time="2025-09-13T00:02:11.481479513Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-wm5tv,Uid:0aaf6e3b-2112-4be4-8bd2-d8200dc2d876,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a40af8bcd87db90cb1dc5121c4bf81444d9f5b8ea9b13fe3dedab7f4a9739efe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:02:11.481914 kubelet[3139]: E0913 00:02:11.481793 3139 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a40af8bcd87db90cb1dc5121c4bf81444d9f5b8ea9b13fe3dedab7f4a9739efe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:02:11.481914 kubelet[3139]: E0913 00:02:11.481875 3139 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a40af8bcd87db90cb1dc5121c4bf81444d9f5b8ea9b13fe3dedab7f4a9739efe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-wm5tv" Sep 13 00:02:11.481914 kubelet[3139]: E0913 00:02:11.481894 3139 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a40af8bcd87db90cb1dc5121c4bf81444d9f5b8ea9b13fe3dedab7f4a9739efe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-wm5tv" Sep 13 00:02:11.482045 kubelet[3139]: E0913 00:02:11.481947 3139 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-wm5tv_kube-system(0aaf6e3b-2112-4be4-8bd2-d8200dc2d876)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-wm5tv_kube-system(0aaf6e3b-2112-4be4-8bd2-d8200dc2d876)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a40af8bcd87db90cb1dc5121c4bf81444d9f5b8ea9b13fe3dedab7f4a9739efe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-wm5tv" podUID="0aaf6e3b-2112-4be4-8bd2-d8200dc2d876" Sep 13 00:02:11.489621 containerd[1714]: time="2025-09-13T00:02:11.489240460Z" level=error msg="Failed to destroy network for sandbox \"c94244e508898d8531f5c2f8daf58e39611baff8ea98739a24bdb34b647425b8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:02:11.490249 containerd[1714]: time="2025-09-13T00:02:11.490121419Z" level=error msg="encountered an error cleaning up failed sandbox \"c94244e508898d8531f5c2f8daf58e39611baff8ea98739a24bdb34b647425b8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Sep 13 00:02:11.490249 containerd[1714]: time="2025-09-13T00:02:11.490182298Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8447f55f8b-9w8q9,Uid:ab136b13-4df9-4d75-8b01-4675fa58dba1,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c94244e508898d8531f5c2f8daf58e39611baff8ea98739a24bdb34b647425b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:02:11.490823 kubelet[3139]: E0913 00:02:11.490364 3139 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c94244e508898d8531f5c2f8daf58e39611baff8ea98739a24bdb34b647425b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:02:11.490823 kubelet[3139]: E0913 00:02:11.490658 3139 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c94244e508898d8531f5c2f8daf58e39611baff8ea98739a24bdb34b647425b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8447f55f8b-9w8q9" Sep 13 00:02:11.490823 kubelet[3139]: E0913 00:02:11.490675 3139 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c94244e508898d8531f5c2f8daf58e39611baff8ea98739a24bdb34b647425b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8447f55f8b-9w8q9" Sep 13 00:02:11.491048 kubelet[3139]: E0913 00:02:11.490723 3139 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-8447f55f8b-9w8q9_calico-system(ab136b13-4df9-4d75-8b01-4675fa58dba1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-8447f55f8b-9w8q9_calico-system(ab136b13-4df9-4d75-8b01-4675fa58dba1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c94244e508898d8531f5c2f8daf58e39611baff8ea98739a24bdb34b647425b8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-8447f55f8b-9w8q9" podUID="ab136b13-4df9-4d75-8b01-4675fa58dba1" Sep 13 00:02:11.499530 containerd[1714]: time="2025-09-13T00:02:11.499149924Z" level=error msg="Failed to destroy network for sandbox \"b7432d4b394985289e82e59ddf4f61935a36d0ff24a272ff4c220e0c5c66058b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:02:11.501535 containerd[1714]: time="2025-09-13T00:02:11.501500000Z" level=error msg="encountered an error cleaning up failed sandbox \"b7432d4b394985289e82e59ddf4f61935a36d0ff24a272ff4c220e0c5c66058b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:02:11.501812 containerd[1714]: time="2025-09-13T00:02:11.501693039Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dd67bc454-rxrtp,Uid:c293651a-dfe1-4de8-a4dd-be90531f8a49,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b7432d4b394985289e82e59ddf4f61935a36d0ff24a272ff4c220e0c5c66058b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:02:11.502324 kubelet[3139]: E0913 00:02:11.502149 3139 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7432d4b394985289e82e59ddf4f61935a36d0ff24a272ff4c220e0c5c66058b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:02:11.502324 kubelet[3139]: E0913 00:02:11.502197 3139 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7432d4b394985289e82e59ddf4f61935a36d0ff24a272ff4c220e0c5c66058b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5dd67bc454-rxrtp" Sep 13 00:02:11.502324 kubelet[3139]: E0913 00:02:11.502214 3139 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7432d4b394985289e82e59ddf4f61935a36d0ff24a272ff4c220e0c5c66058b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5dd67bc454-rxrtp" Sep 13 00:02:11.502673 kubelet[3139]: E0913 00:02:11.502246 3139 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5dd67bc454-rxrtp_calico-apiserver(c293651a-dfe1-4de8-a4dd-be90531f8a49)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5dd67bc454-rxrtp_calico-apiserver(c293651a-dfe1-4de8-a4dd-be90531f8a49)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b7432d4b394985289e82e59ddf4f61935a36d0ff24a272ff4c220e0c5c66058b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5dd67bc454-rxrtp" podUID="c293651a-dfe1-4de8-a4dd-be90531f8a49" Sep 13 00:02:11.538383 containerd[1714]: time="2025-09-13T00:02:11.538324298Z" level=error msg="StopPodSandbox for \"8b16589cee9644d11a53ec504a0b949ca17cab71e7560d229885d026373e87bd\" failed" error="failed to destroy network for sandbox \"8b16589cee9644d11a53ec504a0b949ca17cab71e7560d229885d026373e87bd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:02:11.538617 kubelet[3139]: E0913 
00:02:11.538553 3139 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8b16589cee9644d11a53ec504a0b949ca17cab71e7560d229885d026373e87bd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8b16589cee9644d11a53ec504a0b949ca17cab71e7560d229885d026373e87bd" Sep 13 00:02:11.538668 kubelet[3139]: E0913 00:02:11.538611 3139 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8b16589cee9644d11a53ec504a0b949ca17cab71e7560d229885d026373e87bd"} Sep 13 00:02:11.538863 kubelet[3139]: E0913 00:02:11.538718 3139 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"43be8aa2-8f03-4619-8b2e-6d113822f84e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8b16589cee9644d11a53ec504a0b949ca17cab71e7560d229885d026373e87bd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:02:11.538863 kubelet[3139]: E0913 00:02:11.538745 3139 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"43be8aa2-8f03-4619-8b2e-6d113822f84e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8b16589cee9644d11a53ec504a0b949ca17cab71e7560d229885d026373e87bd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-9gfvn" podUID="43be8aa2-8f03-4619-8b2e-6d113822f84e" Sep 13 00:02:11.542858 containerd[1714]: time="2025-09-13T00:02:11.542455771Z" level=error msg="StopPodSandbox for \"cfce1458f54e5a38a1625c6fdd4be53b662beee0e800a3a81b2a5054d487d809\" failed" error="failed to destroy network for sandbox \"cfce1458f54e5a38a1625c6fdd4be53b662beee0e800a3a81b2a5054d487d809\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:02:11.543640 kubelet[3139]: E0913 00:02:11.542674 3139 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cfce1458f54e5a38a1625c6fdd4be53b662beee0e800a3a81b2a5054d487d809\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cfce1458f54e5a38a1625c6fdd4be53b662beee0e800a3a81b2a5054d487d809" Sep 13 00:02:11.543640 kubelet[3139]: E0913 00:02:11.542728 3139 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cfce1458f54e5a38a1625c6fdd4be53b662beee0e800a3a81b2a5054d487d809"} Sep 13 00:02:11.543640 kubelet[3139]: E0913 00:02:11.542758 3139 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"999a9d5f-3120-4ce9-9f3b-6230046e28b9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cfce1458f54e5a38a1625c6fdd4be53b662beee0e800a3a81b2a5054d487d809\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:02:11.543640 kubelet[3139]: E0913 00:02:11.542778 3139 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"999a9d5f-3120-4ce9-9f3b-6230046e28b9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cfce1458f54e5a38a1625c6fdd4be53b662beee0e800a3a81b2a5054d487d809\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-758bd4bf57-4k4lt" podUID="999a9d5f-3120-4ce9-9f3b-6230046e28b9" Sep 13 00:02:11.548558 containerd[1714]: time="2025-09-13T00:02:11.548402282Z" level=error msg="StopPodSandbox for \"3375c91854b6104522d2d825899e6319bf11e5cb8d422eccdfa0b733282a1652\" failed" error="failed to destroy network for sandbox \"3375c91854b6104522d2d825899e6319bf11e5cb8d422eccdfa0b733282a1652\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:02:11.548713 kubelet[3139]: E0913 00:02:11.548666 3139 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3375c91854b6104522d2d825899e6319bf11e5cb8d422eccdfa0b733282a1652\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3375c91854b6104522d2d825899e6319bf11e5cb8d422eccdfa0b733282a1652" Sep 13 00:02:11.549892 kubelet[3139]: E0913 00:02:11.549858 3139 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3375c91854b6104522d2d825899e6319bf11e5cb8d422eccdfa0b733282a1652"} Sep 13 00:02:11.549947 kubelet[3139]: E0913 00:02:11.549922 3139 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6bd411cd-c359-4d2d-9237-39cca6617339\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3375c91854b6104522d2d825899e6319bf11e5cb8d422eccdfa0b733282a1652\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:02:11.550013 kubelet[3139]: E0913 00:02:11.549945 3139 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6bd411cd-c359-4d2d-9237-39cca6617339\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3375c91854b6104522d2d825899e6319bf11e5cb8d422eccdfa0b733282a1652\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-77b8b896d4-dwdsk" podUID="6bd411cd-c359-4d2d-9237-39cca6617339" Sep 13 00:02:11.553760 containerd[1714]: time="2025-09-13T00:02:11.553648153Z" level=error msg="Failed to destroy network for sandbox \"9395a518ed208eb790340f2cfb83406f745f397a692a7ac472d7d8191d8d440b\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:02:11.554065 containerd[1714]: time="2025-09-13T00:02:11.554006272Z" level=error msg="encountered an error cleaning up failed sandbox \"9395a518ed208eb790340f2cfb83406f745f397a692a7ac472d7d8191d8d440b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:02:11.554265 containerd[1714]: time="2025-09-13T00:02:11.554150512Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xm8sj,Uid:a5fb5c00-ba33-4510-871e-6572e3bb79c8,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9395a518ed208eb790340f2cfb83406f745f397a692a7ac472d7d8191d8d440b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:02:11.554496 kubelet[3139]: E0913 00:02:11.554453 3139 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9395a518ed208eb790340f2cfb83406f745f397a692a7ac472d7d8191d8d440b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:02:11.554556 kubelet[3139]: E0913 00:02:11.554514 3139 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9395a518ed208eb790340f2cfb83406f745f397a692a7ac472d7d8191d8d440b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xm8sj" Sep 13 00:02:11.554556 kubelet[3139]: E0913 00:02:11.554533 3139 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9395a518ed208eb790340f2cfb83406f745f397a692a7ac472d7d8191d8d440b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xm8sj" Sep 13 00:02:11.554615 kubelet[3139]: E0913 00:02:11.554577 3139 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-xm8sj_calico-system(a5fb5c00-ba33-4510-871e-6572e3bb79c8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-xm8sj_calico-system(a5fb5c00-ba33-4510-871e-6572e3bb79c8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9395a518ed208eb790340f2cfb83406f745f397a692a7ac472d7d8191d8d440b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xm8sj" podUID="a5fb5c00-ba33-4510-871e-6572e3bb79c8" Sep 13 00:02:12.114340 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8b16589cee9644d11a53ec504a0b949ca17cab71e7560d229885d026373e87bd-shm.mount: Deactivated successfully. 
Sep 13 00:02:12.114654 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a40af8bcd87db90cb1dc5121c4bf81444d9f5b8ea9b13fe3dedab7f4a9739efe-shm.mount: Deactivated successfully. Sep 13 00:02:12.114791 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3375c91854b6104522d2d825899e6319bf11e5cb8d422eccdfa0b733282a1652-shm.mount: Deactivated successfully. Sep 13 00:02:12.466666 kubelet[3139]: I0913 00:02:12.466628 3139 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="807ca00e6df247995bf8fc66c092154982bf3c3d5167b2143ead362087ddff60" Sep 13 00:02:12.467629 containerd[1714]: time="2025-09-13T00:02:12.467299873Z" level=info msg="StopPodSandbox for \"807ca00e6df247995bf8fc66c092154982bf3c3d5167b2143ead362087ddff60\"" Sep 13 00:02:12.467629 containerd[1714]: time="2025-09-13T00:02:12.467472073Z" level=info msg="Ensure that sandbox 807ca00e6df247995bf8fc66c092154982bf3c3d5167b2143ead362087ddff60 in task-service has been cleanup successfully" Sep 13 00:02:12.470131 kubelet[3139]: I0913 00:02:12.469675 3139 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7432d4b394985289e82e59ddf4f61935a36d0ff24a272ff4c220e0c5c66058b" Sep 13 00:02:12.470408 containerd[1714]: time="2025-09-13T00:02:12.470377348Z" level=info msg="StopPodSandbox for \"b7432d4b394985289e82e59ddf4f61935a36d0ff24a272ff4c220e0c5c66058b\"" Sep 13 00:02:12.470559 containerd[1714]: time="2025-09-13T00:02:12.470532267Z" level=info msg="Ensure that sandbox b7432d4b394985289e82e59ddf4f61935a36d0ff24a272ff4c220e0c5c66058b in task-service has been cleanup successfully" Sep 13 00:02:12.475139 kubelet[3139]: I0913 00:02:12.475118 3139 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c94244e508898d8531f5c2f8daf58e39611baff8ea98739a24bdb34b647425b8" Sep 13 00:02:12.476350 containerd[1714]: time="2025-09-13T00:02:12.476187138Z" level=info msg="StopPodSandbox for \"c94244e508898d8531f5c2f8daf58e39611baff8ea98739a24bdb34b647425b8\"" Sep 13 00:02:12.476774 kubelet[3139]: I0913 00:02:12.476754 3139 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c5c23e66664020f926d562547ab4d70df51570f45e4244c6074804886cf538c" Sep 13 00:02:12.477039 containerd[1714]: time="2025-09-13T00:02:12.476994617Z" level=info msg="Ensure that sandbox c94244e508898d8531f5c2f8daf58e39611baff8ea98739a24bdb34b647425b8 in task-service has been cleanup successfully" Sep 13 00:02:12.481308 containerd[1714]: time="2025-09-13T00:02:12.481273610Z" level=info msg="StopPodSandbox for \"1c5c23e66664020f926d562547ab4d70df51570f45e4244c6074804886cf538c\"" Sep 13 00:02:12.481548 containerd[1714]: time="2025-09-13T00:02:12.481412769Z" level=info msg="Ensure that sandbox 1c5c23e66664020f926d562547ab4d70df51570f45e4244c6074804886cf538c in task-service has been cleanup successfully" Sep 13 00:02:12.482895 kubelet[3139]: I0913 00:02:12.482877 3139 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a40af8bcd87db90cb1dc5121c4bf81444d9f5b8ea9b13fe3dedab7f4a9739efe" Sep 13 00:02:12.483800 containerd[1714]: time="2025-09-13T00:02:12.483458046Z" level=info msg="StopPodSandbox for \"a40af8bcd87db90cb1dc5121c4bf81444d9f5b8ea9b13fe3dedab7f4a9739efe\"" Sep 13 00:02:12.483800 containerd[1714]: time="2025-09-13T00:02:12.483597486Z" level=info msg="Ensure that sandbox a40af8bcd87db90cb1dc5121c4bf81444d9f5b8ea9b13fe3dedab7f4a9739efe in task-service has been cleanup successfully" Sep 13 00:02:12.485958 kubelet[3139]: I0913 
00:02:12.485936 3139 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9395a518ed208eb790340f2cfb83406f745f397a692a7ac472d7d8191d8d440b" Sep 13 00:02:12.486969 containerd[1714]: time="2025-09-13T00:02:12.486938680Z" level=info msg="StopPodSandbox for \"9395a518ed208eb790340f2cfb83406f745f397a692a7ac472d7d8191d8d440b\"" Sep 13 00:02:12.487133 containerd[1714]: time="2025-09-13T00:02:12.487106480Z" level=info msg="Ensure that sandbox 9395a518ed208eb790340f2cfb83406f745f397a692a7ac472d7d8191d8d440b in task-service has been cleanup successfully" Sep 13 00:02:12.538703 containerd[1714]: time="2025-09-13T00:02:12.538312435Z" level=error msg="StopPodSandbox for \"807ca00e6df247995bf8fc66c092154982bf3c3d5167b2143ead362087ddff60\" failed" error="failed to destroy network for sandbox \"807ca00e6df247995bf8fc66c092154982bf3c3d5167b2143ead362087ddff60\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:02:12.539861 kubelet[3139]: E0913 00:02:12.539638 3139 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"807ca00e6df247995bf8fc66c092154982bf3c3d5167b2143ead362087ddff60\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="807ca00e6df247995bf8fc66c092154982bf3c3d5167b2143ead362087ddff60" Sep 13 00:02:12.539861 kubelet[3139]: E0913 00:02:12.539683 3139 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"807ca00e6df247995bf8fc66c092154982bf3c3d5167b2143ead362087ddff60"} Sep 13 00:02:12.539861 kubelet[3139]: E0913 00:02:12.539719 3139 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7951ccfd-1c72-4867-a215-dc5163f06c2d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"807ca00e6df247995bf8fc66c092154982bf3c3d5167b2143ead362087ddff60\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:02:12.539861 kubelet[3139]: E0913 00:02:12.539795 3139 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7951ccfd-1c72-4867-a215-dc5163f06c2d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"807ca00e6df247995bf8fc66c092154982bf3c3d5167b2143ead362087ddff60\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5dd67bc454-q6cd7" podUID="7951ccfd-1c72-4867-a215-dc5163f06c2d" Sep 13 00:02:12.552215 containerd[1714]: time="2025-09-13T00:02:12.552163332Z" level=error msg="StopPodSandbox for \"b7432d4b394985289e82e59ddf4f61935a36d0ff24a272ff4c220e0c5c66058b\" failed" error="failed to destroy network for sandbox \"b7432d4b394985289e82e59ddf4f61935a36d0ff24a272ff4c220e0c5c66058b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:02:12.552610 
kubelet[3139]: E0913 00:02:12.552425 3139 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b7432d4b394985289e82e59ddf4f61935a36d0ff24a272ff4c220e0c5c66058b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b7432d4b394985289e82e59ddf4f61935a36d0ff24a272ff4c220e0c5c66058b" Sep 13 00:02:12.552610 kubelet[3139]: E0913 00:02:12.552468 3139 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b7432d4b394985289e82e59ddf4f61935a36d0ff24a272ff4c220e0c5c66058b"} Sep 13 00:02:12.552610 kubelet[3139]: E0913 00:02:12.552500 3139 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c293651a-dfe1-4de8-a4dd-be90531f8a49\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b7432d4b394985289e82e59ddf4f61935a36d0ff24a272ff4c220e0c5c66058b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:02:12.552610 kubelet[3139]: E0913 00:02:12.552527 3139 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c293651a-dfe1-4de8-a4dd-be90531f8a49\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b7432d4b394985289e82e59ddf4f61935a36d0ff24a272ff4c220e0c5c66058b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5dd67bc454-rxrtp" podUID="c293651a-dfe1-4de8-a4dd-be90531f8a49" Sep 13 00:02:12.564618 containerd[1714]: time="2025-09-13T00:02:12.564011872Z" level=error msg="StopPodSandbox for \"1c5c23e66664020f926d562547ab4d70df51570f45e4244c6074804886cf538c\" failed" error="failed to destroy network for sandbox \"1c5c23e66664020f926d562547ab4d70df51570f45e4244c6074804886cf538c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:02:12.565443 containerd[1714]: time="2025-09-13T00:02:12.564655031Z" level=error msg="StopPodSandbox for \"a40af8bcd87db90cb1dc5121c4bf81444d9f5b8ea9b13fe3dedab7f4a9739efe\" failed" error="failed to destroy network for sandbox \"a40af8bcd87db90cb1dc5121c4bf81444d9f5b8ea9b13fe3dedab7f4a9739efe\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:02:12.565873 kubelet[3139]: E0913 00:02:12.565639 3139 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1c5c23e66664020f926d562547ab4d70df51570f45e4244c6074804886cf538c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1c5c23e66664020f926d562547ab4d70df51570f45e4244c6074804886cf538c" Sep 13 00:02:12.565873 kubelet[3139]: E0913 00:02:12.565686 3139 kuberuntime_manager.go:1479] "Failed 
to stop sandbox" podSandboxID={"Type":"containerd","ID":"1c5c23e66664020f926d562547ab4d70df51570f45e4244c6074804886cf538c"} Sep 13 00:02:12.565873 kubelet[3139]: E0913 00:02:12.565727 3139 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b2871a94-de4f-43dc-9c64-d0d7c69fe615\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1c5c23e66664020f926d562547ab4d70df51570f45e4244c6074804886cf538c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:02:12.565873 kubelet[3139]: E0913 00:02:12.565746 3139 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b2871a94-de4f-43dc-9c64-d0d7c69fe615\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1c5c23e66664020f926d562547ab4d70df51570f45e4244c6074804886cf538c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-wfk44" podUID="b2871a94-de4f-43dc-9c64-d0d7c69fe615" Sep 13 00:02:12.566531 kubelet[3139]: E0913 00:02:12.565430 3139 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a40af8bcd87db90cb1dc5121c4bf81444d9f5b8ea9b13fe3dedab7f4a9739efe\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a40af8bcd87db90cb1dc5121c4bf81444d9f5b8ea9b13fe3dedab7f4a9739efe" Sep 13 00:02:12.566531 kubelet[3139]: E0913 00:02:12.566124 3139 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a40af8bcd87db90cb1dc5121c4bf81444d9f5b8ea9b13fe3dedab7f4a9739efe"} Sep 13 00:02:12.566531 kubelet[3139]: E0913 00:02:12.566174 3139 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0aaf6e3b-2112-4be4-8bd2-d8200dc2d876\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a40af8bcd87db90cb1dc5121c4bf81444d9f5b8ea9b13fe3dedab7f4a9739efe\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:02:12.566531 kubelet[3139]: E0913 00:02:12.566192 3139 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0aaf6e3b-2112-4be4-8bd2-d8200dc2d876\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a40af8bcd87db90cb1dc5121c4bf81444d9f5b8ea9b13fe3dedab7f4a9739efe\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-wm5tv" podUID="0aaf6e3b-2112-4be4-8bd2-d8200dc2d876" Sep 13 00:02:12.569298 containerd[1714]: time="2025-09-13T00:02:12.568631664Z" level=error msg="StopPodSandbox for \"c94244e508898d8531f5c2f8daf58e39611baff8ea98739a24bdb34b647425b8\" failed" error="failed to destroy network for sandbox 
\"c94244e508898d8531f5c2f8daf58e39611baff8ea98739a24bdb34b647425b8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:02:12.569387 kubelet[3139]: E0913 00:02:12.569182 3139 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c94244e508898d8531f5c2f8daf58e39611baff8ea98739a24bdb34b647425b8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c94244e508898d8531f5c2f8daf58e39611baff8ea98739a24bdb34b647425b8" Sep 13 00:02:12.569387 kubelet[3139]: E0913 00:02:12.569219 3139 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c94244e508898d8531f5c2f8daf58e39611baff8ea98739a24bdb34b647425b8"} Sep 13 00:02:12.569544 kubelet[3139]: E0913 00:02:12.569241 3139 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ab136b13-4df9-4d75-8b01-4675fa58dba1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c94244e508898d8531f5c2f8daf58e39611baff8ea98739a24bdb34b647425b8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:02:12.569544 kubelet[3139]: E0913 00:02:12.569496 3139 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ab136b13-4df9-4d75-8b01-4675fa58dba1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c94244e508898d8531f5c2f8daf58e39611baff8ea98739a24bdb34b647425b8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-8447f55f8b-9w8q9" podUID="ab136b13-4df9-4d75-8b01-4675fa58dba1" Sep 13 00:02:12.574728 containerd[1714]: time="2025-09-13T00:02:12.574645574Z" level=error msg="StopPodSandbox for \"9395a518ed208eb790340f2cfb83406f745f397a692a7ac472d7d8191d8d440b\" failed" error="failed to destroy network for sandbox \"9395a518ed208eb790340f2cfb83406f745f397a692a7ac472d7d8191d8d440b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:02:12.574873 kubelet[3139]: E0913 00:02:12.574838 3139 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9395a518ed208eb790340f2cfb83406f745f397a692a7ac472d7d8191d8d440b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9395a518ed208eb790340f2cfb83406f745f397a692a7ac472d7d8191d8d440b" Sep 13 00:02:12.574933 kubelet[3139]: E0913 00:02:12.574876 3139 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9395a518ed208eb790340f2cfb83406f745f397a692a7ac472d7d8191d8d440b"} Sep 13 00:02:12.574933 kubelet[3139]: E0913 00:02:12.574911 3139 
kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a5fb5c00-ba33-4510-871e-6572e3bb79c8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9395a518ed208eb790340f2cfb83406f745f397a692a7ac472d7d8191d8d440b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:02:12.575053 kubelet[3139]: E0913 00:02:12.574931 3139 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a5fb5c00-ba33-4510-871e-6572e3bb79c8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9395a518ed208eb790340f2cfb83406f745f397a692a7ac472d7d8191d8d440b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xm8sj" podUID="a5fb5c00-ba33-4510-871e-6572e3bb79c8" Sep 13 00:02:15.586228 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1097063004.mount: Deactivated successfully. Sep 13 00:02:15.711857 containerd[1714]: time="2025-09-13T00:02:15.711783835Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:02:15.717352 containerd[1714]: time="2025-09-13T00:02:15.717295706Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=151100457" Sep 13 00:02:15.721323 containerd[1714]: time="2025-09-13T00:02:15.721276459Z" level=info msg="ImageCreate event name:\"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:02:15.726852 containerd[1714]: time="2025-09-13T00:02:15.726802770Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:02:15.727509 containerd[1714]: time="2025-09-13T00:02:15.727338889Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"151100319\" in 4.292795178s" Sep 13 00:02:15.727509 containerd[1714]: time="2025-09-13T00:02:15.727374649Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\"" Sep 13 00:02:15.744937 containerd[1714]: time="2025-09-13T00:02:15.741771345Z" level=info msg="CreateContainer within sandbox \"f71219d51b303f89aff72a54bb6b397d667eed3fec3f920d12b1e2a24444a3db\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 13 00:02:15.811554 containerd[1714]: time="2025-09-13T00:02:15.811500029Z" level=info msg="CreateContainer within sandbox \"f71219d51b303f89aff72a54bb6b397d667eed3fec3f920d12b1e2a24444a3db\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"4855551216a01897c8de6708bc5f29b16724d86a13ef32f2901ce636d3f68141\"" Sep 13 00:02:15.815111 containerd[1714]: 
time="2025-09-13T00:02:15.812564507Z" level=info msg="StartContainer for \"4855551216a01897c8de6708bc5f29b16724d86a13ef32f2901ce636d3f68141\"" Sep 13 00:02:15.859326 systemd[1]: Started cri-containerd-4855551216a01897c8de6708bc5f29b16724d86a13ef32f2901ce636d3f68141.scope - libcontainer container 4855551216a01897c8de6708bc5f29b16724d86a13ef32f2901ce636d3f68141. Sep 13 00:02:15.904677 containerd[1714]: time="2025-09-13T00:02:15.904637194Z" level=info msg="StartContainer for \"4855551216a01897c8de6708bc5f29b16724d86a13ef32f2901ce636d3f68141\" returns successfully" Sep 13 00:02:16.427040 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 13 00:02:16.427205 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Sep 13 00:02:16.524508 kubelet[3139]: I0913 00:02:16.524439 3139 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-2s4m4" podStartSLOduration=1.657951993 podStartE2EDuration="15.524419243s" podCreationTimestamp="2025-09-13 00:02:01 +0000 UTC" firstStartedPulling="2025-09-13 00:02:01.861787598 +0000 UTC m=+22.667468861" lastFinishedPulling="2025-09-13 00:02:15.728254808 +0000 UTC m=+36.533936111" observedRunningTime="2025-09-13 00:02:16.523263565 +0000 UTC m=+37.328944828" watchObservedRunningTime="2025-09-13 00:02:16.524419243 +0000 UTC m=+37.330100546" Sep 13 00:02:16.568868 containerd[1714]: time="2025-09-13T00:02:16.568398730Z" level=info msg="StopPodSandbox for \"cfce1458f54e5a38a1625c6fdd4be53b662beee0e800a3a81b2a5054d487d809\"" Sep 13 00:02:16.735796 containerd[1714]: 2025-09-13 00:02:16.679 [INFO][4349] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cfce1458f54e5a38a1625c6fdd4be53b662beee0e800a3a81b2a5054d487d809" Sep 13 00:02:16.735796 containerd[1714]: 2025-09-13 00:02:16.679 [INFO][4349] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cfce1458f54e5a38a1625c6fdd4be53b662beee0e800a3a81b2a5054d487d809" iface="eth0" netns="/var/run/netns/cni-47028e41-cfdd-5910-905a-f9c459f61212" Sep 13 00:02:16.735796 containerd[1714]: 2025-09-13 00:02:16.679 [INFO][4349] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cfce1458f54e5a38a1625c6fdd4be53b662beee0e800a3a81b2a5054d487d809" iface="eth0" netns="/var/run/netns/cni-47028e41-cfdd-5910-905a-f9c459f61212" Sep 13 00:02:16.735796 containerd[1714]: 2025-09-13 00:02:16.680 [INFO][4349] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="cfce1458f54e5a38a1625c6fdd4be53b662beee0e800a3a81b2a5054d487d809" iface="eth0" netns="/var/run/netns/cni-47028e41-cfdd-5910-905a-f9c459f61212" Sep 13 00:02:16.735796 containerd[1714]: 2025-09-13 00:02:16.680 [INFO][4349] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cfce1458f54e5a38a1625c6fdd4be53b662beee0e800a3a81b2a5054d487d809" Sep 13 00:02:16.735796 containerd[1714]: 2025-09-13 00:02:16.680 [INFO][4349] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cfce1458f54e5a38a1625c6fdd4be53b662beee0e800a3a81b2a5054d487d809" Sep 13 00:02:16.735796 containerd[1714]: 2025-09-13 00:02:16.711 [INFO][4359] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cfce1458f54e5a38a1625c6fdd4be53b662beee0e800a3a81b2a5054d487d809" HandleID="k8s-pod-network.cfce1458f54e5a38a1625c6fdd4be53b662beee0e800a3a81b2a5054d487d809" Workload="ci--4081.3.5--n--a13ccab244-k8s-whisker--758bd4bf57--4k4lt-eth0" Sep 13 00:02:16.735796 containerd[1714]: 2025-09-13 00:02:16.711 [INFO][4359] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:02:16.735796 containerd[1714]: 2025-09-13 00:02:16.711 [INFO][4359] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:02:16.735796 containerd[1714]: 2025-09-13 00:02:16.726 [WARNING][4359] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cfce1458f54e5a38a1625c6fdd4be53b662beee0e800a3a81b2a5054d487d809" HandleID="k8s-pod-network.cfce1458f54e5a38a1625c6fdd4be53b662beee0e800a3a81b2a5054d487d809" Workload="ci--4081.3.5--n--a13ccab244-k8s-whisker--758bd4bf57--4k4lt-eth0" Sep 13 00:02:16.735796 containerd[1714]: 2025-09-13 00:02:16.726 [INFO][4359] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cfce1458f54e5a38a1625c6fdd4be53b662beee0e800a3a81b2a5054d487d809" HandleID="k8s-pod-network.cfce1458f54e5a38a1625c6fdd4be53b662beee0e800a3a81b2a5054d487d809" Workload="ci--4081.3.5--n--a13ccab244-k8s-whisker--758bd4bf57--4k4lt-eth0" Sep 13 00:02:16.735796 containerd[1714]: 2025-09-13 00:02:16.730 [INFO][4359] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:02:16.735796 containerd[1714]: 2025-09-13 00:02:16.734 [INFO][4349] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cfce1458f54e5a38a1625c6fdd4be53b662beee0e800a3a81b2a5054d487d809" Sep 13 00:02:16.736637 containerd[1714]: time="2025-09-13T00:02:16.736183651Z" level=info msg="TearDown network for sandbox \"cfce1458f54e5a38a1625c6fdd4be53b662beee0e800a3a81b2a5054d487d809\" successfully" Sep 13 00:02:16.736637 containerd[1714]: time="2025-09-13T00:02:16.736213131Z" level=info msg="StopPodSandbox for \"cfce1458f54e5a38a1625c6fdd4be53b662beee0e800a3a81b2a5054d487d809\" returns successfully" Sep 13 00:02:16.741867 systemd[1]: run-netns-cni\x2d47028e41\x2dcfdd\x2d5910\x2d905a\x2df9c459f61212.mount: Deactivated successfully. 
Sep 13 00:02:16.845862 kubelet[3139]: I0913 00:02:16.845822 3139 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/999a9d5f-3120-4ce9-9f3b-6230046e28b9-whisker-ca-bundle\") pod \"999a9d5f-3120-4ce9-9f3b-6230046e28b9\" (UID: \"999a9d5f-3120-4ce9-9f3b-6230046e28b9\") " Sep 13 00:02:16.846002 kubelet[3139]: I0913 00:02:16.845877 3139 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxh4c\" (UniqueName: \"kubernetes.io/projected/999a9d5f-3120-4ce9-9f3b-6230046e28b9-kube-api-access-hxh4c\") pod \"999a9d5f-3120-4ce9-9f3b-6230046e28b9\" (UID: \"999a9d5f-3120-4ce9-9f3b-6230046e28b9\") " Sep 13 00:02:16.846002 kubelet[3139]: I0913 00:02:16.845897 3139 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/999a9d5f-3120-4ce9-9f3b-6230046e28b9-whisker-backend-key-pair\") pod \"999a9d5f-3120-4ce9-9f3b-6230046e28b9\" (UID: \"999a9d5f-3120-4ce9-9f3b-6230046e28b9\") " Sep 13 00:02:16.846493 kubelet[3139]: I0913 00:02:16.846455 3139 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/999a9d5f-3120-4ce9-9f3b-6230046e28b9-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "999a9d5f-3120-4ce9-9f3b-6230046e28b9" (UID: "999a9d5f-3120-4ce9-9f3b-6230046e28b9"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 13 00:02:16.853360 systemd[1]: var-lib-kubelet-pods-999a9d5f\x2d3120\x2d4ce9\x2d9f3b\x2d6230046e28b9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhxh4c.mount: Deactivated successfully. Sep 13 00:02:16.854957 kubelet[3139]: I0913 00:02:16.854311 3139 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/999a9d5f-3120-4ce9-9f3b-6230046e28b9-kube-api-access-hxh4c" (OuterVolumeSpecName: "kube-api-access-hxh4c") pod "999a9d5f-3120-4ce9-9f3b-6230046e28b9" (UID: "999a9d5f-3120-4ce9-9f3b-6230046e28b9"). InnerVolumeSpecName "kube-api-access-hxh4c". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 13 00:02:16.858125 systemd[1]: var-lib-kubelet-pods-999a9d5f\x2d3120\x2d4ce9\x2d9f3b\x2d6230046e28b9-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 13 00:02:16.858788 kubelet[3139]: I0913 00:02:16.858485 3139 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/999a9d5f-3120-4ce9-9f3b-6230046e28b9-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "999a9d5f-3120-4ce9-9f3b-6230046e28b9" (UID: "999a9d5f-3120-4ce9-9f3b-6230046e28b9"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 13 00:02:16.947225 kubelet[3139]: I0913 00:02:16.947174 3139 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/999a9d5f-3120-4ce9-9f3b-6230046e28b9-whisker-ca-bundle\") on node \"ci-4081.3.5-n-a13ccab244\" DevicePath \"\"" Sep 13 00:02:16.947225 kubelet[3139]: I0913 00:02:16.947212 3139 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hxh4c\" (UniqueName: \"kubernetes.io/projected/999a9d5f-3120-4ce9-9f3b-6230046e28b9-kube-api-access-hxh4c\") on node \"ci-4081.3.5-n-a13ccab244\" DevicePath \"\"" Sep 13 00:02:16.947225 kubelet[3139]: I0913 00:02:16.947223 3139 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/999a9d5f-3120-4ce9-9f3b-6230046e28b9-whisker-backend-key-pair\") on node \"ci-4081.3.5-n-a13ccab244\" DevicePath \"\"" Sep 13 00:02:17.337488 systemd[1]: Removed slice kubepods-besteffort-pod999a9d5f_3120_4ce9_9f3b_6230046e28b9.slice - libcontainer container kubepods-besteffort-pod999a9d5f_3120_4ce9_9f3b_6230046e28b9.slice. Sep 13 00:02:17.592442 systemd[1]: Created slice kubepods-besteffort-pod8d8428d9_ce71_4c79_b3fa_30368c96aed6.slice - libcontainer container kubepods-besteffort-pod8d8428d9_ce71_4c79_b3fa_30368c96aed6.slice. Sep 13 00:02:17.651337 kubelet[3139]: I0913 00:02:17.651222 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plmhh\" (UniqueName: \"kubernetes.io/projected/8d8428d9-ce71-4c79-b3fa-30368c96aed6-kube-api-access-plmhh\") pod \"whisker-84f89dcdd8-dzzx8\" (UID: \"8d8428d9-ce71-4c79-b3fa-30368c96aed6\") " pod="calico-system/whisker-84f89dcdd8-dzzx8" Sep 13 00:02:17.651337 kubelet[3139]: I0913 00:02:17.651266 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8d8428d9-ce71-4c79-b3fa-30368c96aed6-whisker-ca-bundle\") pod \"whisker-84f89dcdd8-dzzx8\" (UID: \"8d8428d9-ce71-4c79-b3fa-30368c96aed6\") " pod="calico-system/whisker-84f89dcdd8-dzzx8" Sep 13 00:02:17.651337 kubelet[3139]: I0913 00:02:17.651285 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8d8428d9-ce71-4c79-b3fa-30368c96aed6-whisker-backend-key-pair\") pod \"whisker-84f89dcdd8-dzzx8\" (UID: \"8d8428d9-ce71-4c79-b3fa-30368c96aed6\") " pod="calico-system/whisker-84f89dcdd8-dzzx8" Sep 13 00:02:17.899654 containerd[1714]: time="2025-09-13T00:02:17.899198316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-84f89dcdd8-dzzx8,Uid:8d8428d9-ce71-4c79-b3fa-30368c96aed6,Namespace:calico-system,Attempt:0,}" Sep 13 00:02:18.147296 systemd-networkd[1567]: cali37cfc02a0bf: Link UP Sep 13 00:02:18.148413 systemd-networkd[1567]: cali37cfc02a0bf: Gained carrier Sep 13 00:02:18.176096 containerd[1714]: 2025-09-13 00:02:17.976 [INFO][4426] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:02:18.176096 containerd[1714]: 2025-09-13 00:02:17.996 [INFO][4426] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--n--a13ccab244-k8s-whisker--84f89dcdd8--dzzx8-eth0 whisker-84f89dcdd8- calico-system 8d8428d9-ce71-4c79-b3fa-30368c96aed6 928 0 2025-09-13 00:02:17 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker 
pod-template-hash:84f89dcdd8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.5-n-a13ccab244 whisker-84f89dcdd8-dzzx8 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali37cfc02a0bf [] [] }} ContainerID="84a39002487d81a4036c7a2d5423ac1a6eb21c4a472b303bce5cf45b20a59b79" Namespace="calico-system" Pod="whisker-84f89dcdd8-dzzx8" WorkloadEndpoint="ci--4081.3.5--n--a13ccab244-k8s-whisker--84f89dcdd8--dzzx8-" Sep 13 00:02:18.176096 containerd[1714]: 2025-09-13 00:02:17.996 [INFO][4426] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="84a39002487d81a4036c7a2d5423ac1a6eb21c4a472b303bce5cf45b20a59b79" Namespace="calico-system" Pod="whisker-84f89dcdd8-dzzx8" WorkloadEndpoint="ci--4081.3.5--n--a13ccab244-k8s-whisker--84f89dcdd8--dzzx8-eth0" Sep 13 00:02:18.176096 containerd[1714]: 2025-09-13 00:02:18.029 [INFO][4455] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="84a39002487d81a4036c7a2d5423ac1a6eb21c4a472b303bce5cf45b20a59b79" HandleID="k8s-pod-network.84a39002487d81a4036c7a2d5423ac1a6eb21c4a472b303bce5cf45b20a59b79" Workload="ci--4081.3.5--n--a13ccab244-k8s-whisker--84f89dcdd8--dzzx8-eth0" Sep 13 00:02:18.176096 containerd[1714]: 2025-09-13 00:02:18.029 [INFO][4455] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="84a39002487d81a4036c7a2d5423ac1a6eb21c4a472b303bce5cf45b20a59b79" HandleID="k8s-pod-network.84a39002487d81a4036c7a2d5423ac1a6eb21c4a472b303bce5cf45b20a59b79" Workload="ci--4081.3.5--n--a13ccab244-k8s-whisker--84f89dcdd8--dzzx8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d36c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.5-n-a13ccab244", "pod":"whisker-84f89dcdd8-dzzx8", "timestamp":"2025-09-13 00:02:18.029131339 +0000 UTC"}, Hostname:"ci-4081.3.5-n-a13ccab244", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:02:18.176096 containerd[1714]: 2025-09-13 00:02:18.029 [INFO][4455] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:02:18.176096 containerd[1714]: 2025-09-13 00:02:18.029 [INFO][4455] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:02:18.176096 containerd[1714]: 2025-09-13 00:02:18.029 [INFO][4455] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-n-a13ccab244' Sep 13 00:02:18.176096 containerd[1714]: 2025-09-13 00:02:18.039 [INFO][4455] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.84a39002487d81a4036c7a2d5423ac1a6eb21c4a472b303bce5cf45b20a59b79" host="ci-4081.3.5-n-a13ccab244" Sep 13 00:02:18.176096 containerd[1714]: 2025-09-13 00:02:18.047 [INFO][4455] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-n-a13ccab244" Sep 13 00:02:18.176096 containerd[1714]: 2025-09-13 00:02:18.055 [INFO][4455] ipam/ipam.go 511: Trying affinity for 192.168.38.128/26 host="ci-4081.3.5-n-a13ccab244" Sep 13 00:02:18.176096 containerd[1714]: 2025-09-13 00:02:18.057 [INFO][4455] ipam/ipam.go 158: Attempting to load block cidr=192.168.38.128/26 host="ci-4081.3.5-n-a13ccab244" Sep 13 00:02:18.176096 containerd[1714]: 2025-09-13 00:02:18.059 [INFO][4455] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.38.128/26 host="ci-4081.3.5-n-a13ccab244" Sep 13 00:02:18.176096 containerd[1714]: 2025-09-13 00:02:18.059 [INFO][4455] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.38.128/26 handle="k8s-pod-network.84a39002487d81a4036c7a2d5423ac1a6eb21c4a472b303bce5cf45b20a59b79" host="ci-4081.3.5-n-a13ccab244" Sep 13 00:02:18.176096 containerd[1714]: 2025-09-13 00:02:18.061 [INFO][4455] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.84a39002487d81a4036c7a2d5423ac1a6eb21c4a472b303bce5cf45b20a59b79 Sep 13 00:02:18.176096 containerd[1714]: 2025-09-13 00:02:18.066 [INFO][4455] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.38.128/26 handle="k8s-pod-network.84a39002487d81a4036c7a2d5423ac1a6eb21c4a472b303bce5cf45b20a59b79" host="ci-4081.3.5-n-a13ccab244" Sep 13 00:02:18.176096 containerd[1714]: 2025-09-13 00:02:18.077 [INFO][4455] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.38.129/26] block=192.168.38.128/26 handle="k8s-pod-network.84a39002487d81a4036c7a2d5423ac1a6eb21c4a472b303bce5cf45b20a59b79" host="ci-4081.3.5-n-a13ccab244" Sep 13 00:02:18.176096 containerd[1714]: 2025-09-13 00:02:18.077 [INFO][4455] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.38.129/26] handle="k8s-pod-network.84a39002487d81a4036c7a2d5423ac1a6eb21c4a472b303bce5cf45b20a59b79" host="ci-4081.3.5-n-a13ccab244" Sep 13 00:02:18.176096 containerd[1714]: 2025-09-13 00:02:18.077 [INFO][4455] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:02:18.176096 containerd[1714]: 2025-09-13 00:02:18.077 [INFO][4455] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.38.129/26] IPv6=[] ContainerID="84a39002487d81a4036c7a2d5423ac1a6eb21c4a472b303bce5cf45b20a59b79" HandleID="k8s-pod-network.84a39002487d81a4036c7a2d5423ac1a6eb21c4a472b303bce5cf45b20a59b79" Workload="ci--4081.3.5--n--a13ccab244-k8s-whisker--84f89dcdd8--dzzx8-eth0" Sep 13 00:02:18.177281 containerd[1714]: 2025-09-13 00:02:18.080 [INFO][4426] cni-plugin/k8s.go 418: Populated endpoint ContainerID="84a39002487d81a4036c7a2d5423ac1a6eb21c4a472b303bce5cf45b20a59b79" Namespace="calico-system" Pod="whisker-84f89dcdd8-dzzx8" WorkloadEndpoint="ci--4081.3.5--n--a13ccab244-k8s-whisker--84f89dcdd8--dzzx8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--a13ccab244-k8s-whisker--84f89dcdd8--dzzx8-eth0", GenerateName:"whisker-84f89dcdd8-", Namespace:"calico-system", SelfLink:"", UID:"8d8428d9-ce71-4c79-b3fa-30368c96aed6", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 2, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"84f89dcdd8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-a13ccab244", ContainerID:"", Pod:"whisker-84f89dcdd8-dzzx8", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.38.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali37cfc02a0bf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:02:18.177281 containerd[1714]: 2025-09-13 00:02:18.080 [INFO][4426] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.38.129/32] ContainerID="84a39002487d81a4036c7a2d5423ac1a6eb21c4a472b303bce5cf45b20a59b79" Namespace="calico-system" Pod="whisker-84f89dcdd8-dzzx8" WorkloadEndpoint="ci--4081.3.5--n--a13ccab244-k8s-whisker--84f89dcdd8--dzzx8-eth0" Sep 13 00:02:18.177281 containerd[1714]: 2025-09-13 00:02:18.080 [INFO][4426] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali37cfc02a0bf ContainerID="84a39002487d81a4036c7a2d5423ac1a6eb21c4a472b303bce5cf45b20a59b79" Namespace="calico-system" Pod="whisker-84f89dcdd8-dzzx8" WorkloadEndpoint="ci--4081.3.5--n--a13ccab244-k8s-whisker--84f89dcdd8--dzzx8-eth0" Sep 13 00:02:18.177281 containerd[1714]: 2025-09-13 00:02:18.150 [INFO][4426] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="84a39002487d81a4036c7a2d5423ac1a6eb21c4a472b303bce5cf45b20a59b79" Namespace="calico-system" Pod="whisker-84f89dcdd8-dzzx8" WorkloadEndpoint="ci--4081.3.5--n--a13ccab244-k8s-whisker--84f89dcdd8--dzzx8-eth0" Sep 13 00:02:18.177281 containerd[1714]: 2025-09-13 00:02:18.151 [INFO][4426] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="84a39002487d81a4036c7a2d5423ac1a6eb21c4a472b303bce5cf45b20a59b79" Namespace="calico-system" 
Pod="whisker-84f89dcdd8-dzzx8" WorkloadEndpoint="ci--4081.3.5--n--a13ccab244-k8s-whisker--84f89dcdd8--dzzx8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--a13ccab244-k8s-whisker--84f89dcdd8--dzzx8-eth0", GenerateName:"whisker-84f89dcdd8-", Namespace:"calico-system", SelfLink:"", UID:"8d8428d9-ce71-4c79-b3fa-30368c96aed6", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 2, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"84f89dcdd8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-a13ccab244", ContainerID:"84a39002487d81a4036c7a2d5423ac1a6eb21c4a472b303bce5cf45b20a59b79", Pod:"whisker-84f89dcdd8-dzzx8", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.38.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali37cfc02a0bf", MAC:"12:b9:a5:14:48:05", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:02:18.177281 containerd[1714]: 2025-09-13 00:02:18.170 [INFO][4426] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="84a39002487d81a4036c7a2d5423ac1a6eb21c4a472b303bce5cf45b20a59b79" Namespace="calico-system" Pod="whisker-84f89dcdd8-dzzx8" WorkloadEndpoint="ci--4081.3.5--n--a13ccab244-k8s-whisker--84f89dcdd8--dzzx8-eth0" Sep 13 00:02:18.222008 containerd[1714]: time="2025-09-13T00:02:18.220092062Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:02:18.222008 containerd[1714]: time="2025-09-13T00:02:18.220504501Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:02:18.222008 containerd[1714]: time="2025-09-13T00:02:18.221436060Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:02:18.222008 containerd[1714]: time="2025-09-13T00:02:18.221577539Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:02:18.253288 systemd[1]: Started cri-containerd-84a39002487d81a4036c7a2d5423ac1a6eb21c4a472b303bce5cf45b20a59b79.scope - libcontainer container 84a39002487d81a4036c7a2d5423ac1a6eb21c4a472b303bce5cf45b20a59b79. 
Sep 13 00:02:18.302004 containerd[1714]: time="2025-09-13T00:02:18.301951726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-84f89dcdd8-dzzx8,Uid:8d8428d9-ce71-4c79-b3fa-30368c96aed6,Namespace:calico-system,Attempt:0,} returns sandbox id \"84a39002487d81a4036c7a2d5423ac1a6eb21c4a472b303bce5cf45b20a59b79\"" Sep 13 00:02:18.304905 containerd[1714]: time="2025-09-13T00:02:18.304631481Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 13 00:02:19.332544 kubelet[3139]: I0913 00:02:19.331644 3139 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="999a9d5f-3120-4ce9-9f3b-6230046e28b9" path="/var/lib/kubelet/pods/999a9d5f-3120-4ce9-9f3b-6230046e28b9/volumes" Sep 13 00:02:19.593146 containerd[1714]: time="2025-09-13T00:02:19.592554738Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:02:19.596526 containerd[1714]: time="2025-09-13T00:02:19.596479692Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4605606" Sep 13 00:02:19.601571 containerd[1714]: time="2025-09-13T00:02:19.601518803Z" level=info msg="ImageCreate event name:\"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:02:19.608215 containerd[1714]: time="2025-09-13T00:02:19.608158632Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:02:19.609059 containerd[1714]: time="2025-09-13T00:02:19.608953351Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"5974839\" in 1.30428343s" Sep 13 00:02:19.609059 containerd[1714]: time="2025-09-13T00:02:19.608983071Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\"" Sep 13 00:02:19.612096 containerd[1714]: time="2025-09-13T00:02:19.612063306Z" level=info msg="CreateContainer within sandbox \"84a39002487d81a4036c7a2d5423ac1a6eb21c4a472b303bce5cf45b20a59b79\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 13 00:02:19.653712 containerd[1714]: time="2025-09-13T00:02:19.653666277Z" level=info msg="CreateContainer within sandbox \"84a39002487d81a4036c7a2d5423ac1a6eb21c4a472b303bce5cf45b20a59b79\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"3a6f781b20bfd409f1e25f25620da4098c74525dea2a300f1658712e2f599c00\"" Sep 13 00:02:19.654341 containerd[1714]: time="2025-09-13T00:02:19.654312676Z" level=info msg="StartContainer for \"3a6f781b20bfd409f1e25f25620da4098c74525dea2a300f1658712e2f599c00\"" Sep 13 00:02:19.686334 systemd[1]: Started cri-containerd-3a6f781b20bfd409f1e25f25620da4098c74525dea2a300f1658712e2f599c00.scope - libcontainer container 3a6f781b20bfd409f1e25f25620da4098c74525dea2a300f1658712e2f599c00. 
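containerd's "in 1.30428343s" for the whisker pull is simply the wall-clock delta between the PullImage record and pull completion; recomputing it from the two timestamps above reproduces the figure to within the few tens of microseconds containerd spent before emitting the "Pulled" line. A sketch, with both timestamps copied from the records above:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// PullImage logged at 00:02:18.304631481Z, Pulled logged at
	// 00:02:19.608953351Z reporting "in 1.30428343s"; the ~40µs residue
	// is the gap between finishing the pull and writing the log record.
	start, _ := time.Parse(time.RFC3339Nano, "2025-09-13T00:02:18.304631481Z")
	logged, _ := time.Parse(time.RFC3339Nano, "2025-09-13T00:02:19.608953351Z")
	fmt.Println(logged.Sub(start)) // 1.30432187s vs reported 1.30428343s
}
```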
Sep 13 00:02:19.718618 containerd[1714]: time="2025-09-13T00:02:19.718571169Z" level=info msg="StartContainer for \"3a6f781b20bfd409f1e25f25620da4098c74525dea2a300f1658712e2f599c00\" returns successfully" Sep 13 00:02:19.720520 containerd[1714]: time="2025-09-13T00:02:19.720474405Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 13 00:02:19.752148 systemd-networkd[1567]: cali37cfc02a0bf: Gained IPv6LL Sep 13 00:02:22.329238 containerd[1714]: time="2025-09-13T00:02:22.328905986Z" level=info msg="StopPodSandbox for \"3375c91854b6104522d2d825899e6319bf11e5cb8d422eccdfa0b733282a1652\"" Sep 13 00:02:22.407780 containerd[1714]: 2025-09-13 00:02:22.374 [INFO][4658] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3375c91854b6104522d2d825899e6319bf11e5cb8d422eccdfa0b733282a1652" Sep 13 00:02:22.407780 containerd[1714]: 2025-09-13 00:02:22.374 [INFO][4658] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3375c91854b6104522d2d825899e6319bf11e5cb8d422eccdfa0b733282a1652" iface="eth0" netns="/var/run/netns/cni-d7223402-43e4-31e7-698d-2d62c6125cc8" Sep 13 00:02:22.407780 containerd[1714]: 2025-09-13 00:02:22.375 [INFO][4658] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3375c91854b6104522d2d825899e6319bf11e5cb8d422eccdfa0b733282a1652" iface="eth0" netns="/var/run/netns/cni-d7223402-43e4-31e7-698d-2d62c6125cc8" Sep 13 00:02:22.407780 containerd[1714]: 2025-09-13 00:02:22.375 [INFO][4658] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3375c91854b6104522d2d825899e6319bf11e5cb8d422eccdfa0b733282a1652" iface="eth0" netns="/var/run/netns/cni-d7223402-43e4-31e7-698d-2d62c6125cc8" Sep 13 00:02:22.407780 containerd[1714]: 2025-09-13 00:02:22.375 [INFO][4658] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3375c91854b6104522d2d825899e6319bf11e5cb8d422eccdfa0b733282a1652" Sep 13 00:02:22.407780 containerd[1714]: 2025-09-13 00:02:22.375 [INFO][4658] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3375c91854b6104522d2d825899e6319bf11e5cb8d422eccdfa0b733282a1652" Sep 13 00:02:22.407780 containerd[1714]: 2025-09-13 00:02:22.393 [INFO][4665] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3375c91854b6104522d2d825899e6319bf11e5cb8d422eccdfa0b733282a1652" HandleID="k8s-pod-network.3375c91854b6104522d2d825899e6319bf11e5cb8d422eccdfa0b733282a1652" Workload="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--77b8b896d4--dwdsk-eth0" Sep 13 00:02:22.407780 containerd[1714]: 2025-09-13 00:02:22.394 [INFO][4665] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:02:22.407780 containerd[1714]: 2025-09-13 00:02:22.394 [INFO][4665] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:02:22.407780 containerd[1714]: 2025-09-13 00:02:22.402 [WARNING][4665] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3375c91854b6104522d2d825899e6319bf11e5cb8d422eccdfa0b733282a1652" HandleID="k8s-pod-network.3375c91854b6104522d2d825899e6319bf11e5cb8d422eccdfa0b733282a1652" Workload="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--77b8b896d4--dwdsk-eth0" Sep 13 00:02:22.407780 containerd[1714]: 2025-09-13 00:02:22.402 [INFO][4665] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3375c91854b6104522d2d825899e6319bf11e5cb8d422eccdfa0b733282a1652" HandleID="k8s-pod-network.3375c91854b6104522d2d825899e6319bf11e5cb8d422eccdfa0b733282a1652" Workload="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--77b8b896d4--dwdsk-eth0" Sep 13 00:02:22.407780 containerd[1714]: 2025-09-13 00:02:22.403 [INFO][4665] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:02:22.407780 containerd[1714]: 2025-09-13 00:02:22.405 [INFO][4658] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3375c91854b6104522d2d825899e6319bf11e5cb8d422eccdfa0b733282a1652" Sep 13 00:02:22.407780 containerd[1714]: time="2025-09-13T00:02:22.407606975Z" level=info msg="TearDown network for sandbox \"3375c91854b6104522d2d825899e6319bf11e5cb8d422eccdfa0b733282a1652\" successfully" Sep 13 00:02:22.407780 containerd[1714]: time="2025-09-13T00:02:22.407638095Z" level=info msg="StopPodSandbox for \"3375c91854b6104522d2d825899e6319bf11e5cb8d422eccdfa0b733282a1652\" returns successfully" Sep 13 00:02:22.409197 containerd[1714]: time="2025-09-13T00:02:22.408397333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77b8b896d4-dwdsk,Uid:6bd411cd-c359-4d2d-9237-39cca6617339,Namespace:calico-apiserver,Attempt:1,}" Sep 13 00:02:22.411179 systemd[1]: run-netns-cni\x2dd7223402\x2d43e4\x2d31e7\x2d698d\x2d2d62c6125cc8.mount: Deactivated successfully. 
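The mount unit names in this log — run-netns-cni\x2dd7223402…mount, var-lib-kubelet-pods-999a9d5f\x2d3120…kubernetes.io\x7eprojected…mount — are systemd's path escaping: strip the leading '/', turn '/' into '-', and hex-escape bytes outside [A-Za-z0-9:_.], which is why every '-' in a CNI netns ID becomes \x2d and the '~' in kubernetes.io~projected becomes \x7e. A simplified reimplementation of `systemd-escape --path`, sufficient for the paths in this log (systemd has further rules, e.g. for leading dots and the empty path, ignored here):

```go
package main

import (
	"fmt"
	"strings"
)

// escapePath is a simplified sketch of systemd path escaping as seen in
// the mount unit names above: '/' becomes '-', and any byte outside
// [A-Za-z0-9:_.] is written as lowercase \xNN ('-' -> \x2d, '~' -> \x7e).
func escapePath(p string) string {
	p = strings.Trim(p, "/")
	var b strings.Builder
	for i := 0; i < len(p); i++ {
		c := p[i]
		switch {
		case c == '/':
			b.WriteByte('-')
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
			c >= '0' && c <= '9', c == ':', c == '_', c == '.':
			b.WriteByte(c)
		default:
			fmt.Fprintf(&b, `\x%02x`, c)
		}
	}
	return b.String()
}

func main() {
	// -> run-netns-cni\x2dd7223402\x2d43e4\x2d31e7\x2d698d\x2d2d62c6125cc8.mount
	fmt.Println(escapePath("/run/netns/cni-d7223402-43e4-31e7-698d-2d62c6125cc8") + ".mount")
}
```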
Sep 13 00:02:22.566322 systemd-networkd[1567]: cali395485cfe31: Link UP Sep 13 00:02:22.570279 systemd-networkd[1567]: cali395485cfe31: Gained carrier Sep 13 00:02:22.591130 containerd[1714]: 2025-09-13 00:02:22.477 [INFO][4672] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:02:22.591130 containerd[1714]: 2025-09-13 00:02:22.490 [INFO][4672] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--77b8b896d4--dwdsk-eth0 calico-apiserver-77b8b896d4- calico-apiserver 6bd411cd-c359-4d2d-9237-39cca6617339 947 0 2025-09-13 00:01:58 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:77b8b896d4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.5-n-a13ccab244 calico-apiserver-77b8b896d4-dwdsk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali395485cfe31 [] [] }} ContainerID="f092d12b6c48736ed5e7263f33fe481b9a9ac48fce9bff2354295eae3237d8ef" Namespace="calico-apiserver" Pod="calico-apiserver-77b8b896d4-dwdsk" WorkloadEndpoint="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--77b8b896d4--dwdsk-" Sep 13 00:02:22.591130 containerd[1714]: 2025-09-13 00:02:22.490 [INFO][4672] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f092d12b6c48736ed5e7263f33fe481b9a9ac48fce9bff2354295eae3237d8ef" Namespace="calico-apiserver" Pod="calico-apiserver-77b8b896d4-dwdsk" WorkloadEndpoint="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--77b8b896d4--dwdsk-eth0" Sep 13 00:02:22.591130 containerd[1714]: 2025-09-13 00:02:22.512 [INFO][4683] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f092d12b6c48736ed5e7263f33fe481b9a9ac48fce9bff2354295eae3237d8ef" HandleID="k8s-pod-network.f092d12b6c48736ed5e7263f33fe481b9a9ac48fce9bff2354295eae3237d8ef" Workload="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--77b8b896d4--dwdsk-eth0" Sep 13 00:02:22.591130 containerd[1714]: 2025-09-13 00:02:22.512 [INFO][4683] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f092d12b6c48736ed5e7263f33fe481b9a9ac48fce9bff2354295eae3237d8ef" HandleID="k8s-pod-network.f092d12b6c48736ed5e7263f33fe481b9a9ac48fce9bff2354295eae3237d8ef" Workload="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--77b8b896d4--dwdsk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b190), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.5-n-a13ccab244", "pod":"calico-apiserver-77b8b896d4-dwdsk", "timestamp":"2025-09-13 00:02:22.512295961 +0000 UTC"}, Hostname:"ci-4081.3.5-n-a13ccab244", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:02:22.591130 containerd[1714]: 2025-09-13 00:02:22.512 [INFO][4683] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:02:22.591130 containerd[1714]: 2025-09-13 00:02:22.512 [INFO][4683] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:02:22.591130 containerd[1714]: 2025-09-13 00:02:22.512 [INFO][4683] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-n-a13ccab244' Sep 13 00:02:22.591130 containerd[1714]: 2025-09-13 00:02:22.523 [INFO][4683] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f092d12b6c48736ed5e7263f33fe481b9a9ac48fce9bff2354295eae3237d8ef" host="ci-4081.3.5-n-a13ccab244" Sep 13 00:02:22.591130 containerd[1714]: 2025-09-13 00:02:22.527 [INFO][4683] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-n-a13ccab244" Sep 13 00:02:22.591130 containerd[1714]: 2025-09-13 00:02:22.531 [INFO][4683] ipam/ipam.go 511: Trying affinity for 192.168.38.128/26 host="ci-4081.3.5-n-a13ccab244" Sep 13 00:02:22.591130 containerd[1714]: 2025-09-13 00:02:22.532 [INFO][4683] ipam/ipam.go 158: Attempting to load block cidr=192.168.38.128/26 host="ci-4081.3.5-n-a13ccab244" Sep 13 00:02:22.591130 containerd[1714]: 2025-09-13 00:02:22.534 [INFO][4683] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.38.128/26 host="ci-4081.3.5-n-a13ccab244" Sep 13 00:02:22.591130 containerd[1714]: 2025-09-13 00:02:22.534 [INFO][4683] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.38.128/26 handle="k8s-pod-network.f092d12b6c48736ed5e7263f33fe481b9a9ac48fce9bff2354295eae3237d8ef" host="ci-4081.3.5-n-a13ccab244" Sep 13 00:02:22.591130 containerd[1714]: 2025-09-13 00:02:22.536 [INFO][4683] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f092d12b6c48736ed5e7263f33fe481b9a9ac48fce9bff2354295eae3237d8ef Sep 13 00:02:22.591130 containerd[1714]: 2025-09-13 00:02:22.548 [INFO][4683] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.38.128/26 handle="k8s-pod-network.f092d12b6c48736ed5e7263f33fe481b9a9ac48fce9bff2354295eae3237d8ef" host="ci-4081.3.5-n-a13ccab244" Sep 13 00:02:22.591130 containerd[1714]: 2025-09-13 00:02:22.557 [INFO][4683] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.38.130/26] block=192.168.38.128/26 handle="k8s-pod-network.f092d12b6c48736ed5e7263f33fe481b9a9ac48fce9bff2354295eae3237d8ef" host="ci-4081.3.5-n-a13ccab244" Sep 13 00:02:22.591130 containerd[1714]: 2025-09-13 00:02:22.557 [INFO][4683] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.38.130/26] handle="k8s-pod-network.f092d12b6c48736ed5e7263f33fe481b9a9ac48fce9bff2354295eae3237d8ef" host="ci-4081.3.5-n-a13ccab244" Sep 13 00:02:22.591130 containerd[1714]: 2025-09-13 00:02:22.557 [INFO][4683] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:02:22.591130 containerd[1714]: 2025-09-13 00:02:22.557 [INFO][4683] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.38.130/26] IPv6=[] ContainerID="f092d12b6c48736ed5e7263f33fe481b9a9ac48fce9bff2354295eae3237d8ef" HandleID="k8s-pod-network.f092d12b6c48736ed5e7263f33fe481b9a9ac48fce9bff2354295eae3237d8ef" Workload="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--77b8b896d4--dwdsk-eth0" Sep 13 00:02:22.592698 containerd[1714]: 2025-09-13 00:02:22.559 [INFO][4672] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f092d12b6c48736ed5e7263f33fe481b9a9ac48fce9bff2354295eae3237d8ef" Namespace="calico-apiserver" Pod="calico-apiserver-77b8b896d4-dwdsk" WorkloadEndpoint="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--77b8b896d4--dwdsk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--77b8b896d4--dwdsk-eth0", GenerateName:"calico-apiserver-77b8b896d4-", Namespace:"calico-apiserver", SelfLink:"", UID:"6bd411cd-c359-4d2d-9237-39cca6617339", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 1, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77b8b896d4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-a13ccab244", ContainerID:"", Pod:"calico-apiserver-77b8b896d4-dwdsk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.38.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali395485cfe31", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:02:22.592698 containerd[1714]: 2025-09-13 00:02:22.559 [INFO][4672] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.38.130/32] ContainerID="f092d12b6c48736ed5e7263f33fe481b9a9ac48fce9bff2354295eae3237d8ef" Namespace="calico-apiserver" Pod="calico-apiserver-77b8b896d4-dwdsk" WorkloadEndpoint="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--77b8b896d4--dwdsk-eth0" Sep 13 00:02:22.592698 containerd[1714]: 2025-09-13 00:02:22.559 [INFO][4672] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali395485cfe31 ContainerID="f092d12b6c48736ed5e7263f33fe481b9a9ac48fce9bff2354295eae3237d8ef" Namespace="calico-apiserver" Pod="calico-apiserver-77b8b896d4-dwdsk" WorkloadEndpoint="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--77b8b896d4--dwdsk-eth0" Sep 13 00:02:22.592698 containerd[1714]: 2025-09-13 00:02:22.571 [INFO][4672] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f092d12b6c48736ed5e7263f33fe481b9a9ac48fce9bff2354295eae3237d8ef" Namespace="calico-apiserver" Pod="calico-apiserver-77b8b896d4-dwdsk" WorkloadEndpoint="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--77b8b896d4--dwdsk-eth0" Sep 13 00:02:22.592698 containerd[1714]: 2025-09-13 00:02:22.572 
[INFO][4672] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f092d12b6c48736ed5e7263f33fe481b9a9ac48fce9bff2354295eae3237d8ef" Namespace="calico-apiserver" Pod="calico-apiserver-77b8b896d4-dwdsk" WorkloadEndpoint="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--77b8b896d4--dwdsk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--77b8b896d4--dwdsk-eth0", GenerateName:"calico-apiserver-77b8b896d4-", Namespace:"calico-apiserver", SelfLink:"", UID:"6bd411cd-c359-4d2d-9237-39cca6617339", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 1, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77b8b896d4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-a13ccab244", ContainerID:"f092d12b6c48736ed5e7263f33fe481b9a9ac48fce9bff2354295eae3237d8ef", Pod:"calico-apiserver-77b8b896d4-dwdsk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.38.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali395485cfe31", MAC:"02:82:62:39:99:47", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:02:22.592698 containerd[1714]: 2025-09-13 00:02:22.588 [INFO][4672] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f092d12b6c48736ed5e7263f33fe481b9a9ac48fce9bff2354295eae3237d8ef" Namespace="calico-apiserver" Pod="calico-apiserver-77b8b896d4-dwdsk" WorkloadEndpoint="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--77b8b896d4--dwdsk-eth0" Sep 13 00:02:22.800577 containerd[1714]: time="2025-09-13T00:02:22.799879402Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:02:22.800829 containerd[1714]: time="2025-09-13T00:02:22.800763441Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:02:22.804026 containerd[1714]: time="2025-09-13T00:02:22.801692319Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:02:22.804620 containerd[1714]: time="2025-09-13T00:02:22.804365795Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:02:22.850185 systemd[1]: Started cri-containerd-f092d12b6c48736ed5e7263f33fe481b9a9ac48fce9bff2354295eae3237d8ef.scope - libcontainer container f092d12b6c48736ed5e7263f33fe481b9a9ac48fce9bff2354295eae3237d8ef. 
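The endpoint MACs recorded in these traces (12:b9:a5:14:48:05 for the whisker endpoint above, 02:82:62:39:99:47 here) are both locally administered unicast addresses — the 0x02 bit of the first octet is set and the 0x01 multicast bit is clear — which is consistent with software-generated MACs that can never collide with burned-in vendor addresses. A quick verification of the two values from the log:

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// Both MACs from the WorkloadEndpoint records in this log.
	for _, s := range []string{"12:b9:a5:14:48:05", "02:82:62:39:99:47"} {
		mac, err := net.ParseMAC(s)
		if err != nil {
			panic(err)
		}
		// 0x02 bit set => locally administered; 0x01 bit clear => unicast.
		fmt.Printf("%s local=%v unicast=%v\n", mac, mac[0]&0x02 != 0, mac[0]&0x01 == 0)
	}
}
```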
Sep 13 00:02:22.909440 containerd[1714]: time="2025-09-13T00:02:22.909330300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77b8b896d4-dwdsk,Uid:6bd411cd-c359-4d2d-9237-39cca6617339,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"f092d12b6c48736ed5e7263f33fe481b9a9ac48fce9bff2354295eae3237d8ef\"" Sep 13 00:02:23.311973 containerd[1714]: time="2025-09-13T00:02:23.311191231Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:02:23.314626 containerd[1714]: time="2025-09-13T00:02:23.314602066Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=30823700" Sep 13 00:02:23.319357 containerd[1714]: time="2025-09-13T00:02:23.319334618Z" level=info msg="ImageCreate event name:\"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:02:23.326286 containerd[1714]: time="2025-09-13T00:02:23.326240526Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:02:23.327212 containerd[1714]: time="2025-09-13T00:02:23.327174845Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"30823530\" in 3.60664852s" Sep 13 00:02:23.327283 containerd[1714]: time="2025-09-13T00:02:23.327211645Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\"" Sep 13 00:02:23.330635 containerd[1714]: time="2025-09-13T00:02:23.330440079Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 13 00:02:23.331719 containerd[1714]: time="2025-09-13T00:02:23.331693237Z" level=info msg="CreateContainer within sandbox \"84a39002487d81a4036c7a2d5423ac1a6eb21c4a472b303bce5cf45b20a59b79\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 13 00:02:23.395563 containerd[1714]: time="2025-09-13T00:02:23.395521251Z" level=info msg="CreateContainer within sandbox \"84a39002487d81a4036c7a2d5423ac1a6eb21c4a472b303bce5cf45b20a59b79\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"c3e1caa92815552af8f930350ece3bbd70753ff54f8615d4f1cefcaebcdf96f5\"" Sep 13 00:02:23.396379 containerd[1714]: time="2025-09-13T00:02:23.396165890Z" level=info msg="StartContainer for \"c3e1caa92815552af8f930350ece3bbd70753ff54f8615d4f1cefcaebcdf96f5\"" Sep 13 00:02:23.409652 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount751864734.mount: Deactivated successfully. Sep 13 00:02:23.429225 systemd[1]: Started cri-containerd-c3e1caa92815552af8f930350ece3bbd70753ff54f8615d4f1cefcaebcdf96f5.scope - libcontainer container c3e1caa92815552af8f930350ece3bbd70753ff54f8615d4f1cefcaebcdf96f5. 
Sep 13 00:02:23.468973 containerd[1714]: time="2025-09-13T00:02:23.468732129Z" level=info msg="StartContainer for \"c3e1caa92815552af8f930350ece3bbd70753ff54f8615d4f1cefcaebcdf96f5\" returns successfully" Sep 13 00:02:23.545159 kubelet[3139]: I0913 00:02:23.545064 3139 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-84f89dcdd8-dzzx8" podStartSLOduration=1.519388484 podStartE2EDuration="6.545044882s" podCreationTimestamp="2025-09-13 00:02:17 +0000 UTC" firstStartedPulling="2025-09-13 00:02:18.303771683 +0000 UTC m=+39.109452986" lastFinishedPulling="2025-09-13 00:02:23.329428081 +0000 UTC m=+44.135109384" observedRunningTime="2025-09-13 00:02:23.543613205 +0000 UTC m=+44.349294548" watchObservedRunningTime="2025-09-13 00:02:23.545044882 +0000 UTC m=+44.350726185" Sep 13 00:02:24.040159 systemd-networkd[1567]: cali395485cfe31: Gained IPv6LL Sep 13 00:02:24.329895 containerd[1714]: time="2025-09-13T00:02:24.329509057Z" level=info msg="StopPodSandbox for \"b7432d4b394985289e82e59ddf4f61935a36d0ff24a272ff4c220e0c5c66058b\"" Sep 13 00:02:24.330108 containerd[1714]: time="2025-09-13T00:02:24.330063576Z" level=info msg="StopPodSandbox for \"8b16589cee9644d11a53ec504a0b949ca17cab71e7560d229885d026373e87bd\"" Sep 13 00:02:24.455210 containerd[1714]: 2025-09-13 00:02:24.398 [INFO][4845] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b7432d4b394985289e82e59ddf4f61935a36d0ff24a272ff4c220e0c5c66058b" Sep 13 00:02:24.455210 containerd[1714]: 2025-09-13 00:02:24.398 [INFO][4845] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b7432d4b394985289e82e59ddf4f61935a36d0ff24a272ff4c220e0c5c66058b" iface="eth0" netns="/var/run/netns/cni-98395455-1f8f-5820-7549-9bddc077673a" Sep 13 00:02:24.455210 containerd[1714]: 2025-09-13 00:02:24.399 [INFO][4845] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b7432d4b394985289e82e59ddf4f61935a36d0ff24a272ff4c220e0c5c66058b" iface="eth0" netns="/var/run/netns/cni-98395455-1f8f-5820-7549-9bddc077673a" Sep 13 00:02:24.455210 containerd[1714]: 2025-09-13 00:02:24.399 [INFO][4845] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b7432d4b394985289e82e59ddf4f61935a36d0ff24a272ff4c220e0c5c66058b" iface="eth0" netns="/var/run/netns/cni-98395455-1f8f-5820-7549-9bddc077673a" Sep 13 00:02:24.455210 containerd[1714]: 2025-09-13 00:02:24.399 [INFO][4845] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b7432d4b394985289e82e59ddf4f61935a36d0ff24a272ff4c220e0c5c66058b" Sep 13 00:02:24.455210 containerd[1714]: 2025-09-13 00:02:24.400 [INFO][4845] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b7432d4b394985289e82e59ddf4f61935a36d0ff24a272ff4c220e0c5c66058b" Sep 13 00:02:24.455210 containerd[1714]: 2025-09-13 00:02:24.435 [INFO][4857] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b7432d4b394985289e82e59ddf4f61935a36d0ff24a272ff4c220e0c5c66058b" HandleID="k8s-pod-network.b7432d4b394985289e82e59ddf4f61935a36d0ff24a272ff4c220e0c5c66058b" Workload="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--5dd67bc454--rxrtp-eth0" Sep 13 00:02:24.455210 containerd[1714]: 2025-09-13 00:02:24.435 [INFO][4857] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:02:24.455210 containerd[1714]: 2025-09-13 00:02:24.435 [INFO][4857] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:02:24.455210 containerd[1714]: 2025-09-13 00:02:24.446 [WARNING][4857] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b7432d4b394985289e82e59ddf4f61935a36d0ff24a272ff4c220e0c5c66058b" HandleID="k8s-pod-network.b7432d4b394985289e82e59ddf4f61935a36d0ff24a272ff4c220e0c5c66058b" Workload="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--5dd67bc454--rxrtp-eth0" Sep 13 00:02:24.455210 containerd[1714]: 2025-09-13 00:02:24.446 [INFO][4857] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b7432d4b394985289e82e59ddf4f61935a36d0ff24a272ff4c220e0c5c66058b" HandleID="k8s-pod-network.b7432d4b394985289e82e59ddf4f61935a36d0ff24a272ff4c220e0c5c66058b" Workload="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--5dd67bc454--rxrtp-eth0" Sep 13 00:02:24.455210 containerd[1714]: 2025-09-13 00:02:24.448 [INFO][4857] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:02:24.455210 containerd[1714]: 2025-09-13 00:02:24.451 [INFO][4845] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b7432d4b394985289e82e59ddf4f61935a36d0ff24a272ff4c220e0c5c66058b" Sep 13 00:02:24.455814 containerd[1714]: time="2025-09-13T00:02:24.455625647Z" level=info msg="TearDown network for sandbox \"b7432d4b394985289e82e59ddf4f61935a36d0ff24a272ff4c220e0c5c66058b\" successfully" Sep 13 00:02:24.455814 containerd[1714]: time="2025-09-13T00:02:24.455652367Z" level=info msg="StopPodSandbox for \"b7432d4b394985289e82e59ddf4f61935a36d0ff24a272ff4c220e0c5c66058b\" returns successfully" Sep 13 00:02:24.461638 systemd[1]: run-netns-cni\x2d98395455\x2d1f8f\x2d5820\x2d7549\x2d9bddc077673a.mount: Deactivated successfully. Sep 13 00:02:24.462559 containerd[1714]: time="2025-09-13T00:02:24.462519636Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dd67bc454-rxrtp,Uid:c293651a-dfe1-4de8-a4dd-be90531f8a49,Namespace:calico-apiserver,Attempt:1,}" Sep 13 00:02:24.483416 containerd[1714]: 2025-09-13 00:02:24.429 [INFO][4844] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8b16589cee9644d11a53ec504a0b949ca17cab71e7560d229885d026373e87bd" Sep 13 00:02:24.483416 containerd[1714]: 2025-09-13 00:02:24.430 [INFO][4844] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8b16589cee9644d11a53ec504a0b949ca17cab71e7560d229885d026373e87bd" iface="eth0" netns="/var/run/netns/cni-c815710b-7e64-ea3c-0abf-343ba7c7393f" Sep 13 00:02:24.483416 containerd[1714]: 2025-09-13 00:02:24.430 [INFO][4844] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8b16589cee9644d11a53ec504a0b949ca17cab71e7560d229885d026373e87bd" iface="eth0" netns="/var/run/netns/cni-c815710b-7e64-ea3c-0abf-343ba7c7393f" Sep 13 00:02:24.483416 containerd[1714]: 2025-09-13 00:02:24.430 [INFO][4844] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="8b16589cee9644d11a53ec504a0b949ca17cab71e7560d229885d026373e87bd" iface="eth0" netns="/var/run/netns/cni-c815710b-7e64-ea3c-0abf-343ba7c7393f" Sep 13 00:02:24.483416 containerd[1714]: 2025-09-13 00:02:24.430 [INFO][4844] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8b16589cee9644d11a53ec504a0b949ca17cab71e7560d229885d026373e87bd" Sep 13 00:02:24.483416 containerd[1714]: 2025-09-13 00:02:24.430 [INFO][4844] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8b16589cee9644d11a53ec504a0b949ca17cab71e7560d229885d026373e87bd" Sep 13 00:02:24.483416 containerd[1714]: 2025-09-13 00:02:24.468 [INFO][4863] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8b16589cee9644d11a53ec504a0b949ca17cab71e7560d229885d026373e87bd" HandleID="k8s-pod-network.8b16589cee9644d11a53ec504a0b949ca17cab71e7560d229885d026373e87bd" Workload="ci--4081.3.5--n--a13ccab244-k8s-goldmane--7988f88666--9gfvn-eth0" Sep 13 00:02:24.483416 containerd[1714]: 2025-09-13 00:02:24.468 [INFO][4863] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:02:24.483416 containerd[1714]: 2025-09-13 00:02:24.468 [INFO][4863] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:02:24.483416 containerd[1714]: 2025-09-13 00:02:24.478 [WARNING][4863] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8b16589cee9644d11a53ec504a0b949ca17cab71e7560d229885d026373e87bd" HandleID="k8s-pod-network.8b16589cee9644d11a53ec504a0b949ca17cab71e7560d229885d026373e87bd" Workload="ci--4081.3.5--n--a13ccab244-k8s-goldmane--7988f88666--9gfvn-eth0" Sep 13 00:02:24.483416 containerd[1714]: 2025-09-13 00:02:24.478 [INFO][4863] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8b16589cee9644d11a53ec504a0b949ca17cab71e7560d229885d026373e87bd" HandleID="k8s-pod-network.8b16589cee9644d11a53ec504a0b949ca17cab71e7560d229885d026373e87bd" Workload="ci--4081.3.5--n--a13ccab244-k8s-goldmane--7988f88666--9gfvn-eth0" Sep 13 00:02:24.483416 containerd[1714]: 2025-09-13 00:02:24.480 [INFO][4863] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:02:24.483416 containerd[1714]: 2025-09-13 00:02:24.481 [INFO][4844] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8b16589cee9644d11a53ec504a0b949ca17cab71e7560d229885d026373e87bd" Sep 13 00:02:24.486653 containerd[1714]: time="2025-09-13T00:02:24.483529361Z" level=info msg="TearDown network for sandbox \"8b16589cee9644d11a53ec504a0b949ca17cab71e7560d229885d026373e87bd\" successfully" Sep 13 00:02:24.486653 containerd[1714]: time="2025-09-13T00:02:24.483556241Z" level=info msg="StopPodSandbox for \"8b16589cee9644d11a53ec504a0b949ca17cab71e7560d229885d026373e87bd\" returns successfully" Sep 13 00:02:24.486653 containerd[1714]: time="2025-09-13T00:02:24.486219796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-9gfvn,Uid:43be8aa2-8f03-4619-8b2e-6d113822f84e,Namespace:calico-system,Attempt:1,}" Sep 13 00:02:24.485923 systemd[1]: run-netns-cni\x2dc815710b\x2d7e64\x2dea3c\x2d0abf\x2d343ba7c7393f.mount: Deactivated successfully. 
Sep 13 00:02:24.680110 systemd-networkd[1567]: cali31c167188ec: Link UP Sep 13 00:02:24.683009 systemd-networkd[1567]: cali31c167188ec: Gained carrier Sep 13 00:02:24.704508 containerd[1714]: 2025-09-13 00:02:24.565 [INFO][4871] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:02:24.704508 containerd[1714]: 2025-09-13 00:02:24.582 [INFO][4871] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--5dd67bc454--rxrtp-eth0 calico-apiserver-5dd67bc454- calico-apiserver c293651a-dfe1-4de8-a4dd-be90531f8a49 966 0 2025-09-13 00:01:56 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5dd67bc454 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.5-n-a13ccab244 calico-apiserver-5dd67bc454-rxrtp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali31c167188ec [] [] }} ContainerID="bd9dbe351b6b5df2822671ed801620e0f27fd3e3163fa08ceaedbc1fe727bd40" Namespace="calico-apiserver" Pod="calico-apiserver-5dd67bc454-rxrtp" WorkloadEndpoint="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--5dd67bc454--rxrtp-" Sep 13 00:02:24.704508 containerd[1714]: 2025-09-13 00:02:24.582 [INFO][4871] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bd9dbe351b6b5df2822671ed801620e0f27fd3e3163fa08ceaedbc1fe727bd40" Namespace="calico-apiserver" Pod="calico-apiserver-5dd67bc454-rxrtp" WorkloadEndpoint="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--5dd67bc454--rxrtp-eth0" Sep 13 00:02:24.704508 containerd[1714]: 2025-09-13 00:02:24.625 [INFO][4895] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bd9dbe351b6b5df2822671ed801620e0f27fd3e3163fa08ceaedbc1fe727bd40" HandleID="k8s-pod-network.bd9dbe351b6b5df2822671ed801620e0f27fd3e3163fa08ceaedbc1fe727bd40" Workload="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--5dd67bc454--rxrtp-eth0" Sep 13 00:02:24.704508 containerd[1714]: 2025-09-13 00:02:24.625 [INFO][4895] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bd9dbe351b6b5df2822671ed801620e0f27fd3e3163fa08ceaedbc1fe727bd40" HandleID="k8s-pod-network.bd9dbe351b6b5df2822671ed801620e0f27fd3e3163fa08ceaedbc1fe727bd40" Workload="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--5dd67bc454--rxrtp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3680), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.5-n-a13ccab244", "pod":"calico-apiserver-5dd67bc454-rxrtp", "timestamp":"2025-09-13 00:02:24.625224005 +0000 UTC"}, Hostname:"ci-4081.3.5-n-a13ccab244", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:02:24.704508 containerd[1714]: 2025-09-13 00:02:24.625 [INFO][4895] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:02:24.704508 containerd[1714]: 2025-09-13 00:02:24.626 [INFO][4895] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:02:24.704508 containerd[1714]: 2025-09-13 00:02:24.626 [INFO][4895] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-n-a13ccab244' Sep 13 00:02:24.704508 containerd[1714]: 2025-09-13 00:02:24.636 [INFO][4895] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bd9dbe351b6b5df2822671ed801620e0f27fd3e3163fa08ceaedbc1fe727bd40" host="ci-4081.3.5-n-a13ccab244" Sep 13 00:02:24.704508 containerd[1714]: 2025-09-13 00:02:24.642 [INFO][4895] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-n-a13ccab244" Sep 13 00:02:24.704508 containerd[1714]: 2025-09-13 00:02:24.648 [INFO][4895] ipam/ipam.go 511: Trying affinity for 192.168.38.128/26 host="ci-4081.3.5-n-a13ccab244" Sep 13 00:02:24.704508 containerd[1714]: 2025-09-13 00:02:24.651 [INFO][4895] ipam/ipam.go 158: Attempting to load block cidr=192.168.38.128/26 host="ci-4081.3.5-n-a13ccab244" Sep 13 00:02:24.704508 containerd[1714]: 2025-09-13 00:02:24.653 [INFO][4895] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.38.128/26 host="ci-4081.3.5-n-a13ccab244" Sep 13 00:02:24.704508 containerd[1714]: 2025-09-13 00:02:24.653 [INFO][4895] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.38.128/26 handle="k8s-pod-network.bd9dbe351b6b5df2822671ed801620e0f27fd3e3163fa08ceaedbc1fe727bd40" host="ci-4081.3.5-n-a13ccab244" Sep 13 00:02:24.704508 containerd[1714]: 2025-09-13 00:02:24.654 [INFO][4895] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.bd9dbe351b6b5df2822671ed801620e0f27fd3e3163fa08ceaedbc1fe727bd40 Sep 13 00:02:24.704508 containerd[1714]: 2025-09-13 00:02:24.659 [INFO][4895] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.38.128/26 handle="k8s-pod-network.bd9dbe351b6b5df2822671ed801620e0f27fd3e3163fa08ceaedbc1fe727bd40" host="ci-4081.3.5-n-a13ccab244" Sep 13 00:02:24.704508 containerd[1714]: 2025-09-13 00:02:24.668 [INFO][4895] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.38.131/26] block=192.168.38.128/26 handle="k8s-pod-network.bd9dbe351b6b5df2822671ed801620e0f27fd3e3163fa08ceaedbc1fe727bd40" host="ci-4081.3.5-n-a13ccab244" Sep 13 00:02:24.704508 containerd[1714]: 2025-09-13 00:02:24.668 [INFO][4895] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.38.131/26] handle="k8s-pod-network.bd9dbe351b6b5df2822671ed801620e0f27fd3e3163fa08ceaedbc1fe727bd40" host="ci-4081.3.5-n-a13ccab244" Sep 13 00:02:24.704508 containerd[1714]: 2025-09-13 00:02:24.668 [INFO][4895] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:02:24.704508 containerd[1714]: 2025-09-13 00:02:24.669 [INFO][4895] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.38.131/26] IPv6=[] ContainerID="bd9dbe351b6b5df2822671ed801620e0f27fd3e3163fa08ceaedbc1fe727bd40" HandleID="k8s-pod-network.bd9dbe351b6b5df2822671ed801620e0f27fd3e3163fa08ceaedbc1fe727bd40" Workload="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--5dd67bc454--rxrtp-eth0" Sep 13 00:02:24.705244 containerd[1714]: 2025-09-13 00:02:24.671 [INFO][4871] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bd9dbe351b6b5df2822671ed801620e0f27fd3e3163fa08ceaedbc1fe727bd40" Namespace="calico-apiserver" Pod="calico-apiserver-5dd67bc454-rxrtp" WorkloadEndpoint="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--5dd67bc454--rxrtp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--5dd67bc454--rxrtp-eth0", GenerateName:"calico-apiserver-5dd67bc454-", Namespace:"calico-apiserver", SelfLink:"", UID:"c293651a-dfe1-4de8-a4dd-be90531f8a49", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 1, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dd67bc454", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-a13ccab244", ContainerID:"", Pod:"calico-apiserver-5dd67bc454-rxrtp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.38.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali31c167188ec", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:02:24.705244 containerd[1714]: 2025-09-13 00:02:24.671 [INFO][4871] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.38.131/32] ContainerID="bd9dbe351b6b5df2822671ed801620e0f27fd3e3163fa08ceaedbc1fe727bd40" Namespace="calico-apiserver" Pod="calico-apiserver-5dd67bc454-rxrtp" WorkloadEndpoint="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--5dd67bc454--rxrtp-eth0" Sep 13 00:02:24.705244 containerd[1714]: 2025-09-13 00:02:24.671 [INFO][4871] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali31c167188ec ContainerID="bd9dbe351b6b5df2822671ed801620e0f27fd3e3163fa08ceaedbc1fe727bd40" Namespace="calico-apiserver" Pod="calico-apiserver-5dd67bc454-rxrtp" WorkloadEndpoint="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--5dd67bc454--rxrtp-eth0" Sep 13 00:02:24.705244 containerd[1714]: 2025-09-13 00:02:24.685 [INFO][4871] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bd9dbe351b6b5df2822671ed801620e0f27fd3e3163fa08ceaedbc1fe727bd40" Namespace="calico-apiserver" Pod="calico-apiserver-5dd67bc454-rxrtp" WorkloadEndpoint="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--5dd67bc454--rxrtp-eth0" Sep 13 00:02:24.705244 containerd[1714]: 2025-09-13 00:02:24.686 
[INFO][4871] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bd9dbe351b6b5df2822671ed801620e0f27fd3e3163fa08ceaedbc1fe727bd40" Namespace="calico-apiserver" Pod="calico-apiserver-5dd67bc454-rxrtp" WorkloadEndpoint="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--5dd67bc454--rxrtp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--5dd67bc454--rxrtp-eth0", GenerateName:"calico-apiserver-5dd67bc454-", Namespace:"calico-apiserver", SelfLink:"", UID:"c293651a-dfe1-4de8-a4dd-be90531f8a49", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 1, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dd67bc454", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-a13ccab244", ContainerID:"bd9dbe351b6b5df2822671ed801620e0f27fd3e3163fa08ceaedbc1fe727bd40", Pod:"calico-apiserver-5dd67bc454-rxrtp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.38.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali31c167188ec", MAC:"82:c4:06:52:ff:da", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:02:24.705244 containerd[1714]: 2025-09-13 00:02:24.702 [INFO][4871] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bd9dbe351b6b5df2822671ed801620e0f27fd3e3163fa08ceaedbc1fe727bd40" Namespace="calico-apiserver" Pod="calico-apiserver-5dd67bc454-rxrtp" WorkloadEndpoint="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--5dd67bc454--rxrtp-eth0" Sep 13 00:02:24.723738 containerd[1714]: time="2025-09-13T00:02:24.723591602Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:02:24.723738 containerd[1714]: time="2025-09-13T00:02:24.723684442Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:02:24.724179 containerd[1714]: time="2025-09-13T00:02:24.723754042Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:02:24.724252 containerd[1714]: time="2025-09-13T00:02:24.724168481Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:02:24.742244 systemd[1]: Started cri-containerd-bd9dbe351b6b5df2822671ed801620e0f27fd3e3163fa08ceaedbc1fe727bd40.scope - libcontainer container bd9dbe351b6b5df2822671ed801620e0f27fd3e3163fa08ceaedbc1fe727bd40. 
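The ADD trace above walks Calico's IPAM happy path: acquire the host-wide lock, look up the node's block affinities, confirm the affine block 192.168.38.128/26, assign one address from it, create a per-container handle, and write the block back to claim 192.168.38.131/26. A simplified sketch of the claim step (real Calico IPAM persists blocks in the datastore; the used set below is illustrative, and the holder of .129 is not shown in this excerpt):

    // First-free-address walk over the affine /26 block, landing on
    // 192.168.38.131 as the trace above does.
    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        block := netip.MustParsePrefix("192.168.38.128/26")
        used := map[netip.Addr]bool{
            netip.MustParseAddr("192.168.38.128"): true, // network address
            netip.MustParseAddr("192.168.38.129"): true, // an earlier claim, assumed
            netip.MustParseAddr("192.168.38.130"): true, // calico-apiserver-77b8b896d4-dwdsk
        }
        for a := block.Addr(); block.Contains(a); a = a.Next() {
            if !used[a] {
                fmt.Println("claimed", a) // claimed 192.168.38.131
                break
            }
        }
    }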
Sep 13 00:02:24.786633 systemd-networkd[1567]: cali0c7f4ded509: Link UP Sep 13 00:02:24.792553 containerd[1714]: time="2025-09-13T00:02:24.790668371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dd67bc454-rxrtp,Uid:c293651a-dfe1-4de8-a4dd-be90531f8a49,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"bd9dbe351b6b5df2822671ed801620e0f27fd3e3163fa08ceaedbc1fe727bd40\"" Sep 13 00:02:24.793541 systemd-networkd[1567]: cali0c7f4ded509: Gained carrier Sep 13 00:02:24.810971 containerd[1714]: 2025-09-13 00:02:24.578 [INFO][4881] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:02:24.810971 containerd[1714]: 2025-09-13 00:02:24.594 [INFO][4881] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--n--a13ccab244-k8s-goldmane--7988f88666--9gfvn-eth0 goldmane-7988f88666- calico-system 43be8aa2-8f03-4619-8b2e-6d113822f84e 967 0 2025-09-13 00:02:01 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.5-n-a13ccab244 goldmane-7988f88666-9gfvn eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali0c7f4ded509 [] [] }} ContainerID="45c1e6b471ae2216ed87136b7531057f2a88105a16f2764e27ae24711a07975a" Namespace="calico-system" Pod="goldmane-7988f88666-9gfvn" WorkloadEndpoint="ci--4081.3.5--n--a13ccab244-k8s-goldmane--7988f88666--9gfvn-" Sep 13 00:02:24.810971 containerd[1714]: 2025-09-13 00:02:24.594 [INFO][4881] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="45c1e6b471ae2216ed87136b7531057f2a88105a16f2764e27ae24711a07975a" Namespace="calico-system" Pod="goldmane-7988f88666-9gfvn" WorkloadEndpoint="ci--4081.3.5--n--a13ccab244-k8s-goldmane--7988f88666--9gfvn-eth0" Sep 13 00:02:24.810971 containerd[1714]: 2025-09-13 00:02:24.633 [INFO][4900] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="45c1e6b471ae2216ed87136b7531057f2a88105a16f2764e27ae24711a07975a" HandleID="k8s-pod-network.45c1e6b471ae2216ed87136b7531057f2a88105a16f2764e27ae24711a07975a" Workload="ci--4081.3.5--n--a13ccab244-k8s-goldmane--7988f88666--9gfvn-eth0" Sep 13 00:02:24.810971 containerd[1714]: 2025-09-13 00:02:24.633 [INFO][4900] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="45c1e6b471ae2216ed87136b7531057f2a88105a16f2764e27ae24711a07975a" HandleID="k8s-pod-network.45c1e6b471ae2216ed87136b7531057f2a88105a16f2764e27ae24711a07975a" Workload="ci--4081.3.5--n--a13ccab244-k8s-goldmane--7988f88666--9gfvn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024aff0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.5-n-a13ccab244", "pod":"goldmane-7988f88666-9gfvn", "timestamp":"2025-09-13 00:02:24.633278352 +0000 UTC"}, Hostname:"ci-4081.3.5-n-a13ccab244", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:02:24.810971 containerd[1714]: 2025-09-13 00:02:24.633 [INFO][4900] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:02:24.810971 containerd[1714]: 2025-09-13 00:02:24.668 [INFO][4900] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:02:24.810971 containerd[1714]: 2025-09-13 00:02:24.669 [INFO][4900] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-n-a13ccab244' Sep 13 00:02:24.810971 containerd[1714]: 2025-09-13 00:02:24.739 [INFO][4900] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.45c1e6b471ae2216ed87136b7531057f2a88105a16f2764e27ae24711a07975a" host="ci-4081.3.5-n-a13ccab244" Sep 13 00:02:24.810971 containerd[1714]: 2025-09-13 00:02:24.749 [INFO][4900] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-n-a13ccab244" Sep 13 00:02:24.810971 containerd[1714]: 2025-09-13 00:02:24.754 [INFO][4900] ipam/ipam.go 511: Trying affinity for 192.168.38.128/26 host="ci-4081.3.5-n-a13ccab244" Sep 13 00:02:24.810971 containerd[1714]: 2025-09-13 00:02:24.757 [INFO][4900] ipam/ipam.go 158: Attempting to load block cidr=192.168.38.128/26 host="ci-4081.3.5-n-a13ccab244" Sep 13 00:02:24.810971 containerd[1714]: 2025-09-13 00:02:24.760 [INFO][4900] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.38.128/26 host="ci-4081.3.5-n-a13ccab244" Sep 13 00:02:24.810971 containerd[1714]: 2025-09-13 00:02:24.761 [INFO][4900] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.38.128/26 handle="k8s-pod-network.45c1e6b471ae2216ed87136b7531057f2a88105a16f2764e27ae24711a07975a" host="ci-4081.3.5-n-a13ccab244" Sep 13 00:02:24.810971 containerd[1714]: 2025-09-13 00:02:24.763 [INFO][4900] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.45c1e6b471ae2216ed87136b7531057f2a88105a16f2764e27ae24711a07975a Sep 13 00:02:24.810971 containerd[1714]: 2025-09-13 00:02:24.767 [INFO][4900] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.38.128/26 handle="k8s-pod-network.45c1e6b471ae2216ed87136b7531057f2a88105a16f2764e27ae24711a07975a" host="ci-4081.3.5-n-a13ccab244" Sep 13 00:02:24.810971 containerd[1714]: 2025-09-13 00:02:24.779 [INFO][4900] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.38.132/26] block=192.168.38.128/26 handle="k8s-pod-network.45c1e6b471ae2216ed87136b7531057f2a88105a16f2764e27ae24711a07975a" host="ci-4081.3.5-n-a13ccab244" Sep 13 00:02:24.810971 containerd[1714]: 2025-09-13 00:02:24.779 [INFO][4900] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.38.132/26] handle="k8s-pod-network.45c1e6b471ae2216ed87136b7531057f2a88105a16f2764e27ae24711a07975a" host="ci-4081.3.5-n-a13ccab244" Sep 13 00:02:24.810971 containerd[1714]: 2025-09-13 00:02:24.779 [INFO][4900] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:02:24.810971 containerd[1714]: 2025-09-13 00:02:24.779 [INFO][4900] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.38.132/26] IPv6=[] ContainerID="45c1e6b471ae2216ed87136b7531057f2a88105a16f2764e27ae24711a07975a" HandleID="k8s-pod-network.45c1e6b471ae2216ed87136b7531057f2a88105a16f2764e27ae24711a07975a" Workload="ci--4081.3.5--n--a13ccab244-k8s-goldmane--7988f88666--9gfvn-eth0" Sep 13 00:02:24.811585 containerd[1714]: 2025-09-13 00:02:24.781 [INFO][4881] cni-plugin/k8s.go 418: Populated endpoint ContainerID="45c1e6b471ae2216ed87136b7531057f2a88105a16f2764e27ae24711a07975a" Namespace="calico-system" Pod="goldmane-7988f88666-9gfvn" WorkloadEndpoint="ci--4081.3.5--n--a13ccab244-k8s-goldmane--7988f88666--9gfvn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--a13ccab244-k8s-goldmane--7988f88666--9gfvn-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"43be8aa2-8f03-4619-8b2e-6d113822f84e", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 2, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-a13ccab244", ContainerID:"", Pod:"goldmane-7988f88666-9gfvn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.38.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0c7f4ded509", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:02:24.811585 containerd[1714]: 2025-09-13 00:02:24.781 [INFO][4881] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.38.132/32] ContainerID="45c1e6b471ae2216ed87136b7531057f2a88105a16f2764e27ae24711a07975a" Namespace="calico-system" Pod="goldmane-7988f88666-9gfvn" WorkloadEndpoint="ci--4081.3.5--n--a13ccab244-k8s-goldmane--7988f88666--9gfvn-eth0" Sep 13 00:02:24.811585 containerd[1714]: 2025-09-13 00:02:24.781 [INFO][4881] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0c7f4ded509 ContainerID="45c1e6b471ae2216ed87136b7531057f2a88105a16f2764e27ae24711a07975a" Namespace="calico-system" Pod="goldmane-7988f88666-9gfvn" WorkloadEndpoint="ci--4081.3.5--n--a13ccab244-k8s-goldmane--7988f88666--9gfvn-eth0" Sep 13 00:02:24.811585 containerd[1714]: 2025-09-13 00:02:24.787 [INFO][4881] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="45c1e6b471ae2216ed87136b7531057f2a88105a16f2764e27ae24711a07975a" Namespace="calico-system" Pod="goldmane-7988f88666-9gfvn" WorkloadEndpoint="ci--4081.3.5--n--a13ccab244-k8s-goldmane--7988f88666--9gfvn-eth0" Sep 13 00:02:24.811585 containerd[1714]: 2025-09-13 00:02:24.787 [INFO][4881] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="45c1e6b471ae2216ed87136b7531057f2a88105a16f2764e27ae24711a07975a" 
Namespace="calico-system" Pod="goldmane-7988f88666-9gfvn" WorkloadEndpoint="ci--4081.3.5--n--a13ccab244-k8s-goldmane--7988f88666--9gfvn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--a13ccab244-k8s-goldmane--7988f88666--9gfvn-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"43be8aa2-8f03-4619-8b2e-6d113822f84e", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 2, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-a13ccab244", ContainerID:"45c1e6b471ae2216ed87136b7531057f2a88105a16f2764e27ae24711a07975a", Pod:"goldmane-7988f88666-9gfvn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.38.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0c7f4ded509", MAC:"d2:b1:f1:d1:48:d0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:02:24.811585 containerd[1714]: 2025-09-13 00:02:24.806 [INFO][4881] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="45c1e6b471ae2216ed87136b7531057f2a88105a16f2764e27ae24711a07975a" Namespace="calico-system" Pod="goldmane-7988f88666-9gfvn" WorkloadEndpoint="ci--4081.3.5--n--a13ccab244-k8s-goldmane--7988f88666--9gfvn-eth0" Sep 13 00:02:24.830216 containerd[1714]: time="2025-09-13T00:02:24.830049586Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:02:24.830216 containerd[1714]: time="2025-09-13T00:02:24.830104705Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:02:24.830216 containerd[1714]: time="2025-09-13T00:02:24.830115345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:02:24.830455 containerd[1714]: time="2025-09-13T00:02:24.830192905Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:02:24.845314 systemd[1]: Started cri-containerd-45c1e6b471ae2216ed87136b7531057f2a88105a16f2764e27ae24711a07975a.scope - libcontainer container 45c1e6b471ae2216ed87136b7531057f2a88105a16f2764e27ae24711a07975a. 
Sep 13 00:02:24.875983 containerd[1714]: time="2025-09-13T00:02:24.875944789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-9gfvn,Uid:43be8aa2-8f03-4619-8b2e-6d113822f84e,Namespace:calico-system,Attempt:1,} returns sandbox id \"45c1e6b471ae2216ed87136b7531057f2a88105a16f2764e27ae24711a07975a\"" Sep 13 00:02:25.553922 kubelet[3139]: I0913 00:02:25.553878 3139 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:02:26.152250 systemd-networkd[1567]: cali31c167188ec: Gained IPv6LL Sep 13 00:02:26.244062 containerd[1714]: time="2025-09-13T00:02:26.243630961Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:02:26.247310 containerd[1714]: time="2025-09-13T00:02:26.247155755Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=44530807" Sep 13 00:02:26.251755 containerd[1714]: time="2025-09-13T00:02:26.251427228Z" level=info msg="ImageCreate event name:\"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:02:26.257184 containerd[1714]: time="2025-09-13T00:02:26.257154579Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:02:26.257849 containerd[1714]: time="2025-09-13T00:02:26.257813378Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 2.927345379s" Sep 13 00:02:26.257849 containerd[1714]: time="2025-09-13T00:02:26.257847218Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 13 00:02:26.262132 containerd[1714]: time="2025-09-13T00:02:26.261933611Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 13 00:02:26.275517 containerd[1714]: time="2025-09-13T00:02:26.275477828Z" level=info msg="CreateContainer within sandbox \"f092d12b6c48736ed5e7263f33fe481b9a9ac48fce9bff2354295eae3237d8ef\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 00:02:26.315776 containerd[1714]: time="2025-09-13T00:02:26.315652882Z" level=info msg="CreateContainer within sandbox \"f092d12b6c48736ed5e7263f33fe481b9a9ac48fce9bff2354295eae3237d8ef\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"292c625db8d51977b09a7c729076e7ddcda310942bd7b844335b60fb96c74ee1\"" Sep 13 00:02:26.323732 containerd[1714]: time="2025-09-13T00:02:26.322626750Z" level=info msg="StartContainer for \"292c625db8d51977b09a7c729076e7ddcda310942bd7b844335b60fb96c74ee1\"" Sep 13 00:02:26.329013 containerd[1714]: time="2025-09-13T00:02:26.328956420Z" level=info msg="StopPodSandbox for \"1c5c23e66664020f926d562547ab4d70df51570f45e4244c6074804886cf538c\"" Sep 13 00:02:26.329508 containerd[1714]: time="2025-09-13T00:02:26.328956340Z" level=info msg="StopPodSandbox for \"9395a518ed208eb790340f2cfb83406f745f397a692a7ac472d7d8191d8d440b\"" Sep 13 00:02:26.381613 
systemd[1]: Started cri-containerd-292c625db8d51977b09a7c729076e7ddcda310942bd7b844335b60fb96c74ee1.scope - libcontainer container 292c625db8d51977b09a7c729076e7ddcda310942bd7b844335b60fb96c74ee1. Sep 13 00:02:26.555744 containerd[1714]: time="2025-09-13T00:02:26.555391644Z" level=info msg="StartContainer for \"292c625db8d51977b09a7c729076e7ddcda310942bd7b844335b60fb96c74ee1\" returns successfully" Sep 13 00:02:26.579238 containerd[1714]: 2025-09-13 00:02:26.479 [INFO][5113] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1c5c23e66664020f926d562547ab4d70df51570f45e4244c6074804886cf538c" Sep 13 00:02:26.579238 containerd[1714]: 2025-09-13 00:02:26.479 [INFO][5113] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1c5c23e66664020f926d562547ab4d70df51570f45e4244c6074804886cf538c" iface="eth0" netns="/var/run/netns/cni-19de0776-b100-1689-0a96-685903ee14fe" Sep 13 00:02:26.579238 containerd[1714]: 2025-09-13 00:02:26.479 [INFO][5113] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1c5c23e66664020f926d562547ab4d70df51570f45e4244c6074804886cf538c" iface="eth0" netns="/var/run/netns/cni-19de0776-b100-1689-0a96-685903ee14fe" Sep 13 00:02:26.579238 containerd[1714]: 2025-09-13 00:02:26.481 [INFO][5113] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1c5c23e66664020f926d562547ab4d70df51570f45e4244c6074804886cf538c" iface="eth0" netns="/var/run/netns/cni-19de0776-b100-1689-0a96-685903ee14fe" Sep 13 00:02:26.579238 containerd[1714]: 2025-09-13 00:02:26.481 [INFO][5113] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1c5c23e66664020f926d562547ab4d70df51570f45e4244c6074804886cf538c" Sep 13 00:02:26.579238 containerd[1714]: 2025-09-13 00:02:26.481 [INFO][5113] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1c5c23e66664020f926d562547ab4d70df51570f45e4244c6074804886cf538c" Sep 13 00:02:26.579238 containerd[1714]: 2025-09-13 00:02:26.542 [INFO][5156] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1c5c23e66664020f926d562547ab4d70df51570f45e4244c6074804886cf538c" HandleID="k8s-pod-network.1c5c23e66664020f926d562547ab4d70df51570f45e4244c6074804886cf538c" Workload="ci--4081.3.5--n--a13ccab244-k8s-coredns--7c65d6cfc9--wfk44-eth0" Sep 13 00:02:26.579238 containerd[1714]: 2025-09-13 00:02:26.543 [INFO][5156] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:02:26.579238 containerd[1714]: 2025-09-13 00:02:26.543 [INFO][5156] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:02:26.579238 containerd[1714]: 2025-09-13 00:02:26.568 [WARNING][5156] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1c5c23e66664020f926d562547ab4d70df51570f45e4244c6074804886cf538c" HandleID="k8s-pod-network.1c5c23e66664020f926d562547ab4d70df51570f45e4244c6074804886cf538c" Workload="ci--4081.3.5--n--a13ccab244-k8s-coredns--7c65d6cfc9--wfk44-eth0" Sep 13 00:02:26.579238 containerd[1714]: 2025-09-13 00:02:26.568 [INFO][5156] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1c5c23e66664020f926d562547ab4d70df51570f45e4244c6074804886cf538c" HandleID="k8s-pod-network.1c5c23e66664020f926d562547ab4d70df51570f45e4244c6074804886cf538c" Workload="ci--4081.3.5--n--a13ccab244-k8s-coredns--7c65d6cfc9--wfk44-eth0" Sep 13 00:02:26.579238 containerd[1714]: 2025-09-13 00:02:26.571 [INFO][5156] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:02:26.579238 containerd[1714]: 2025-09-13 00:02:26.576 [INFO][5113] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1c5c23e66664020f926d562547ab4d70df51570f45e4244c6074804886cf538c" Sep 13 00:02:26.582677 containerd[1714]: time="2025-09-13T00:02:26.582530239Z" level=info msg="TearDown network for sandbox \"1c5c23e66664020f926d562547ab4d70df51570f45e4244c6074804886cf538c\" successfully" Sep 13 00:02:26.582677 containerd[1714]: time="2025-09-13T00:02:26.582563439Z" level=info msg="StopPodSandbox for \"1c5c23e66664020f926d562547ab4d70df51570f45e4244c6074804886cf538c\" returns successfully" Sep 13 00:02:26.583441 systemd[1]: run-netns-cni\x2d19de0776\x2db100\x2d1689\x2d0a96\x2d685903ee14fe.mount: Deactivated successfully. Sep 13 00:02:26.588496 containerd[1714]: time="2025-09-13T00:02:26.587575471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-wfk44,Uid:b2871a94-de4f-43dc-9c64-d0d7c69fe615,Namespace:kube-system,Attempt:1,}" Sep 13 00:02:26.597362 containerd[1714]: 2025-09-13 00:02:26.458 [INFO][5107] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9395a518ed208eb790340f2cfb83406f745f397a692a7ac472d7d8191d8d440b" Sep 13 00:02:26.597362 containerd[1714]: 2025-09-13 00:02:26.464 [INFO][5107] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9395a518ed208eb790340f2cfb83406f745f397a692a7ac472d7d8191d8d440b" iface="eth0" netns="/var/run/netns/cni-0fb4285e-1591-58a3-b250-b175fb05f1df" Sep 13 00:02:26.597362 containerd[1714]: 2025-09-13 00:02:26.467 [INFO][5107] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9395a518ed208eb790340f2cfb83406f745f397a692a7ac472d7d8191d8d440b" iface="eth0" netns="/var/run/netns/cni-0fb4285e-1591-58a3-b250-b175fb05f1df" Sep 13 00:02:26.597362 containerd[1714]: 2025-09-13 00:02:26.467 [INFO][5107] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9395a518ed208eb790340f2cfb83406f745f397a692a7ac472d7d8191d8d440b" iface="eth0" netns="/var/run/netns/cni-0fb4285e-1591-58a3-b250-b175fb05f1df" Sep 13 00:02:26.597362 containerd[1714]: 2025-09-13 00:02:26.468 [INFO][5107] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9395a518ed208eb790340f2cfb83406f745f397a692a7ac472d7d8191d8d440b" Sep 13 00:02:26.597362 containerd[1714]: 2025-09-13 00:02:26.469 [INFO][5107] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9395a518ed208eb790340f2cfb83406f745f397a692a7ac472d7d8191d8d440b" Sep 13 00:02:26.597362 containerd[1714]: 2025-09-13 00:02:26.562 [INFO][5151] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9395a518ed208eb790340f2cfb83406f745f397a692a7ac472d7d8191d8d440b" HandleID="k8s-pod-network.9395a518ed208eb790340f2cfb83406f745f397a692a7ac472d7d8191d8d440b" Workload="ci--4081.3.5--n--a13ccab244-k8s-csi--node--driver--xm8sj-eth0" Sep 13 00:02:26.597362 containerd[1714]: 2025-09-13 00:02:26.565 [INFO][5151] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:02:26.597362 containerd[1714]: 2025-09-13 00:02:26.571 [INFO][5151] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:02:26.597362 containerd[1714]: 2025-09-13 00:02:26.591 [WARNING][5151] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9395a518ed208eb790340f2cfb83406f745f397a692a7ac472d7d8191d8d440b" HandleID="k8s-pod-network.9395a518ed208eb790340f2cfb83406f745f397a692a7ac472d7d8191d8d440b" Workload="ci--4081.3.5--n--a13ccab244-k8s-csi--node--driver--xm8sj-eth0" Sep 13 00:02:26.597362 containerd[1714]: 2025-09-13 00:02:26.591 [INFO][5151] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9395a518ed208eb790340f2cfb83406f745f397a692a7ac472d7d8191d8d440b" HandleID="k8s-pod-network.9395a518ed208eb790340f2cfb83406f745f397a692a7ac472d7d8191d8d440b" Workload="ci--4081.3.5--n--a13ccab244-k8s-csi--node--driver--xm8sj-eth0" Sep 13 00:02:26.597362 containerd[1714]: 2025-09-13 00:02:26.592 [INFO][5151] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:02:26.597362 containerd[1714]: 2025-09-13 00:02:26.594 [INFO][5107] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9395a518ed208eb790340f2cfb83406f745f397a692a7ac472d7d8191d8d440b" Sep 13 00:02:26.598436 containerd[1714]: time="2025-09-13T00:02:26.597997973Z" level=info msg="TearDown network for sandbox \"9395a518ed208eb790340f2cfb83406f745f397a692a7ac472d7d8191d8d440b\" successfully" Sep 13 00:02:26.598436 containerd[1714]: time="2025-09-13T00:02:26.598049733Z" level=info msg="StopPodSandbox for \"9395a518ed208eb790340f2cfb83406f745f397a692a7ac472d7d8191d8d440b\" returns successfully" Sep 13 00:02:26.599556 containerd[1714]: time="2025-09-13T00:02:26.599425891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xm8sj,Uid:a5fb5c00-ba33-4510-871e-6572e3bb79c8,Namespace:calico-system,Attempt:1,}" Sep 13 00:02:26.600273 systemd-networkd[1567]: cali0c7f4ded509: Gained IPv6LL Sep 13 00:02:26.602536 systemd[1]: run-netns-cni\x2d0fb4285e\x2d1591\x2d58a3\x2db250\x2db175fb05f1df.mount: Deactivated successfully. 
Sep 13 00:02:26.612640 containerd[1714]: time="2025-09-13T00:02:26.612591789Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:02:26.629378 containerd[1714]: time="2025-09-13T00:02:26.629335722Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 13 00:02:26.690697 containerd[1714]: time="2025-09-13T00:02:26.690526100Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 428.557849ms" Sep 13 00:02:26.690697 containerd[1714]: time="2025-09-13T00:02:26.690568220Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 13 00:02:26.693057 containerd[1714]: time="2025-09-13T00:02:26.692428097Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 13 00:02:26.694929 containerd[1714]: time="2025-09-13T00:02:26.694731813Z" level=info msg="CreateContainer within sandbox \"bd9dbe351b6b5df2822671ed801620e0f27fd3e3163fa08ceaedbc1fe727bd40\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 00:02:26.733201 kubelet[3139]: I0913 00:02:26.733042 3139 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:02:26.759111 containerd[1714]: time="2025-09-13T00:02:26.759059306Z" level=info msg="CreateContainer within sandbox \"bd9dbe351b6b5df2822671ed801620e0f27fd3e3163fa08ceaedbc1fe727bd40\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a48fa92ed2d33a6f68b3a8874798c2b6b683a8fe962927fc6886164fb4375d7b\"" Sep 13 00:02:26.762480 containerd[1714]: time="2025-09-13T00:02:26.760246464Z" level=info msg="StartContainer for \"a48fa92ed2d33a6f68b3a8874798c2b6b683a8fe962927fc6886164fb4375d7b\"" Sep 13 00:02:26.794188 systemd[1]: Started cri-containerd-a48fa92ed2d33a6f68b3a8874798c2b6b683a8fe962927fc6886164fb4375d7b.scope - libcontainer container a48fa92ed2d33a6f68b3a8874798c2b6b683a8fe962927fc6886164fb4375d7b. 
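The second apiserver pull above is a cache hit: containerd emits ImageUpdate rather than ImageCreate, reads only 77 bytes (plausibly just the registry manifest check, since every layer already landed during the pull that finished at 00:02:26.257), and reports the same image ID and digest in 428.557849ms. Comparing the two reported durations, values copied from the log:

    // Cold pull vs warm resolve for ghcr.io/flatcar/calico/apiserver:v3.30.3.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        cold, _ := time.ParseDuration("2.927345379s") // first pull
        warm, _ := time.ParseDuration("428.557849ms") // bytes read=77
        fmt.Printf("warm resolve ~%.1fx faster\n", cold.Seconds()/warm.Seconds()) // ~6.8x
    }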
Sep 13 00:02:26.902624 containerd[1714]: time="2025-09-13T00:02:26.902518268Z" level=info msg="StartContainer for \"a48fa92ed2d33a6f68b3a8874798c2b6b683a8fe962927fc6886164fb4375d7b\" returns successfully" Sep 13 00:02:26.927258 systemd-networkd[1567]: calic15881fce0c: Link UP Sep 13 00:02:26.928363 systemd-networkd[1567]: calic15881fce0c: Gained carrier Sep 13 00:02:26.953651 containerd[1714]: 2025-09-13 00:02:26.727 [INFO][5181] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:02:26.953651 containerd[1714]: 2025-09-13 00:02:26.759 [INFO][5181] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--n--a13ccab244-k8s-coredns--7c65d6cfc9--wfk44-eth0 coredns-7c65d6cfc9- kube-system b2871a94-de4f-43dc-9c64-d0d7c69fe615 986 0 2025-09-13 00:01:46 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.5-n-a13ccab244 coredns-7c65d6cfc9-wfk44 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic15881fce0c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="dfbaf709bd2a86801e45083e16e9840b19339e55555063e1119bbcc555d8e333" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wfk44" WorkloadEndpoint="ci--4081.3.5--n--a13ccab244-k8s-coredns--7c65d6cfc9--wfk44-" Sep 13 00:02:26.953651 containerd[1714]: 2025-09-13 00:02:26.760 [INFO][5181] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dfbaf709bd2a86801e45083e16e9840b19339e55555063e1119bbcc555d8e333" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wfk44" WorkloadEndpoint="ci--4081.3.5--n--a13ccab244-k8s-coredns--7c65d6cfc9--wfk44-eth0" Sep 13 00:02:26.953651 containerd[1714]: 2025-09-13 00:02:26.841 [INFO][5231] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dfbaf709bd2a86801e45083e16e9840b19339e55555063e1119bbcc555d8e333" HandleID="k8s-pod-network.dfbaf709bd2a86801e45083e16e9840b19339e55555063e1119bbcc555d8e333" Workload="ci--4081.3.5--n--a13ccab244-k8s-coredns--7c65d6cfc9--wfk44-eth0" Sep 13 00:02:26.953651 containerd[1714]: 2025-09-13 00:02:26.841 [INFO][5231] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="dfbaf709bd2a86801e45083e16e9840b19339e55555063e1119bbcc555d8e333" HandleID="k8s-pod-network.dfbaf709bd2a86801e45083e16e9840b19339e55555063e1119bbcc555d8e333" Workload="ci--4081.3.5--n--a13ccab244-k8s-coredns--7c65d6cfc9--wfk44-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3600), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.5-n-a13ccab244", "pod":"coredns-7c65d6cfc9-wfk44", "timestamp":"2025-09-13 00:02:26.84123477 +0000 UTC"}, Hostname:"ci-4081.3.5-n-a13ccab244", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:02:26.953651 containerd[1714]: 2025-09-13 00:02:26.841 [INFO][5231] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:02:26.953651 containerd[1714]: 2025-09-13 00:02:26.841 [INFO][5231] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:02:26.953651 containerd[1714]: 2025-09-13 00:02:26.841 [INFO][5231] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-n-a13ccab244' Sep 13 00:02:26.953651 containerd[1714]: 2025-09-13 00:02:26.856 [INFO][5231] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dfbaf709bd2a86801e45083e16e9840b19339e55555063e1119bbcc555d8e333" host="ci-4081.3.5-n-a13ccab244" Sep 13 00:02:26.953651 containerd[1714]: 2025-09-13 00:02:26.861 [INFO][5231] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-n-a13ccab244" Sep 13 00:02:26.953651 containerd[1714]: 2025-09-13 00:02:26.867 [INFO][5231] ipam/ipam.go 511: Trying affinity for 192.168.38.128/26 host="ci-4081.3.5-n-a13ccab244" Sep 13 00:02:26.953651 containerd[1714]: 2025-09-13 00:02:26.870 [INFO][5231] ipam/ipam.go 158: Attempting to load block cidr=192.168.38.128/26 host="ci-4081.3.5-n-a13ccab244" Sep 13 00:02:26.953651 containerd[1714]: 2025-09-13 00:02:26.878 [INFO][5231] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.38.128/26 host="ci-4081.3.5-n-a13ccab244" Sep 13 00:02:26.953651 containerd[1714]: 2025-09-13 00:02:26.878 [INFO][5231] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.38.128/26 handle="k8s-pod-network.dfbaf709bd2a86801e45083e16e9840b19339e55555063e1119bbcc555d8e333" host="ci-4081.3.5-n-a13ccab244" Sep 13 00:02:26.953651 containerd[1714]: 2025-09-13 00:02:26.880 [INFO][5231] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.dfbaf709bd2a86801e45083e16e9840b19339e55555063e1119bbcc555d8e333 Sep 13 00:02:26.953651 containerd[1714]: 2025-09-13 00:02:26.887 [INFO][5231] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.38.128/26 handle="k8s-pod-network.dfbaf709bd2a86801e45083e16e9840b19339e55555063e1119bbcc555d8e333" host="ci-4081.3.5-n-a13ccab244" Sep 13 00:02:26.953651 containerd[1714]: 2025-09-13 00:02:26.914 [INFO][5231] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.38.133/26] block=192.168.38.128/26 handle="k8s-pod-network.dfbaf709bd2a86801e45083e16e9840b19339e55555063e1119bbcc555d8e333" host="ci-4081.3.5-n-a13ccab244" Sep 13 00:02:26.953651 containerd[1714]: 2025-09-13 00:02:26.915 [INFO][5231] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.38.133/26] handle="k8s-pod-network.dfbaf709bd2a86801e45083e16e9840b19339e55555063e1119bbcc555d8e333" host="ci-4081.3.5-n-a13ccab244" Sep 13 00:02:26.953651 containerd[1714]: 2025-09-13 00:02:26.915 [INFO][5231] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:02:26.953651 containerd[1714]: 2025-09-13 00:02:26.915 [INFO][5231] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.38.133/26] IPv6=[] ContainerID="dfbaf709bd2a86801e45083e16e9840b19339e55555063e1119bbcc555d8e333" HandleID="k8s-pod-network.dfbaf709bd2a86801e45083e16e9840b19339e55555063e1119bbcc555d8e333" Workload="ci--4081.3.5--n--a13ccab244-k8s-coredns--7c65d6cfc9--wfk44-eth0" Sep 13 00:02:26.954817 containerd[1714]: 2025-09-13 00:02:26.917 [INFO][5181] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dfbaf709bd2a86801e45083e16e9840b19339e55555063e1119bbcc555d8e333" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wfk44" WorkloadEndpoint="ci--4081.3.5--n--a13ccab244-k8s-coredns--7c65d6cfc9--wfk44-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--a13ccab244-k8s-coredns--7c65d6cfc9--wfk44-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"b2871a94-de4f-43dc-9c64-d0d7c69fe615", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 1, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-a13ccab244", ContainerID:"", Pod:"coredns-7c65d6cfc9-wfk44", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.38.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic15881fce0c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:02:26.954817 containerd[1714]: 2025-09-13 00:02:26.917 [INFO][5181] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.38.133/32] ContainerID="dfbaf709bd2a86801e45083e16e9840b19339e55555063e1119bbcc555d8e333" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wfk44" WorkloadEndpoint="ci--4081.3.5--n--a13ccab244-k8s-coredns--7c65d6cfc9--wfk44-eth0" Sep 13 00:02:26.954817 containerd[1714]: 2025-09-13 00:02:26.917 [INFO][5181] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic15881fce0c ContainerID="dfbaf709bd2a86801e45083e16e9840b19339e55555063e1119bbcc555d8e333" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wfk44" WorkloadEndpoint="ci--4081.3.5--n--a13ccab244-k8s-coredns--7c65d6cfc9--wfk44-eth0" Sep 13 00:02:26.954817 containerd[1714]: 2025-09-13 00:02:26.929 [INFO][5181] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dfbaf709bd2a86801e45083e16e9840b19339e55555063e1119bbcc555d8e333" Namespace="kube-system" 
Pod="coredns-7c65d6cfc9-wfk44" WorkloadEndpoint="ci--4081.3.5--n--a13ccab244-k8s-coredns--7c65d6cfc9--wfk44-eth0" Sep 13 00:02:26.954817 containerd[1714]: 2025-09-13 00:02:26.929 [INFO][5181] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dfbaf709bd2a86801e45083e16e9840b19339e55555063e1119bbcc555d8e333" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wfk44" WorkloadEndpoint="ci--4081.3.5--n--a13ccab244-k8s-coredns--7c65d6cfc9--wfk44-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--a13ccab244-k8s-coredns--7c65d6cfc9--wfk44-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"b2871a94-de4f-43dc-9c64-d0d7c69fe615", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 1, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-a13ccab244", ContainerID:"dfbaf709bd2a86801e45083e16e9840b19339e55555063e1119bbcc555d8e333", Pod:"coredns-7c65d6cfc9-wfk44", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.38.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic15881fce0c", MAC:"5a:a2:73:6a:76:dc", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:02:26.954817 containerd[1714]: 2025-09-13 00:02:26.950 [INFO][5181] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dfbaf709bd2a86801e45083e16e9840b19339e55555063e1119bbcc555d8e333" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wfk44" WorkloadEndpoint="ci--4081.3.5--n--a13ccab244-k8s-coredns--7c65d6cfc9--wfk44-eth0" Sep 13 00:02:26.990215 containerd[1714]: time="2025-09-13T00:02:26.989832564Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:02:26.990215 containerd[1714]: time="2025-09-13T00:02:26.989901804Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:02:26.990215 containerd[1714]: time="2025-09-13T00:02:26.989913524Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:02:26.990215 containerd[1714]: time="2025-09-13T00:02:26.990009163Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:02:27.017261 systemd[1]: Started cri-containerd-dfbaf709bd2a86801e45083e16e9840b19339e55555063e1119bbcc555d8e333.scope - libcontainer container dfbaf709bd2a86801e45083e16e9840b19339e55555063e1119bbcc555d8e333. Sep 13 00:02:27.026732 systemd-networkd[1567]: calibb28e7e77a6: Link UP Sep 13 00:02:27.027778 systemd-networkd[1567]: calibb28e7e77a6: Gained carrier Sep 13 00:02:27.051438 containerd[1714]: 2025-09-13 00:02:26.766 [INFO][5197] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:02:27.051438 containerd[1714]: 2025-09-13 00:02:26.826 [INFO][5197] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--n--a13ccab244-k8s-csi--node--driver--xm8sj-eth0 csi-node-driver- calico-system a5fb5c00-ba33-4510-871e-6572e3bb79c8 985 0 2025-09-13 00:02:01 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.5-n-a13ccab244 csi-node-driver-xm8sj eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calibb28e7e77a6 [] [] }} ContainerID="0b0a843ae894241a6b7cbd8a2aa904f8210396c3a3cc5b725ab6204e4381f0c6" Namespace="calico-system" Pod="csi-node-driver-xm8sj" WorkloadEndpoint="ci--4081.3.5--n--a13ccab244-k8s-csi--node--driver--xm8sj-" Sep 13 00:02:27.051438 containerd[1714]: 2025-09-13 00:02:26.827 [INFO][5197] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0b0a843ae894241a6b7cbd8a2aa904f8210396c3a3cc5b725ab6204e4381f0c6" Namespace="calico-system" Pod="csi-node-driver-xm8sj" WorkloadEndpoint="ci--4081.3.5--n--a13ccab244-k8s-csi--node--driver--xm8sj-eth0" Sep 13 00:02:27.051438 containerd[1714]: 2025-09-13 00:02:26.899 [INFO][5237] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0b0a843ae894241a6b7cbd8a2aa904f8210396c3a3cc5b725ab6204e4381f0c6" HandleID="k8s-pod-network.0b0a843ae894241a6b7cbd8a2aa904f8210396c3a3cc5b725ab6204e4381f0c6" Workload="ci--4081.3.5--n--a13ccab244-k8s-csi--node--driver--xm8sj-eth0" Sep 13 00:02:27.051438 containerd[1714]: 2025-09-13 00:02:26.900 [INFO][5237] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0b0a843ae894241a6b7cbd8a2aa904f8210396c3a3cc5b725ab6204e4381f0c6" HandleID="k8s-pod-network.0b0a843ae894241a6b7cbd8a2aa904f8210396c3a3cc5b725ab6204e4381f0c6" Workload="ci--4081.3.5--n--a13ccab244-k8s-csi--node--driver--xm8sj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004da90), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.5-n-a13ccab244", "pod":"csi-node-driver-xm8sj", "timestamp":"2025-09-13 00:02:26.899306634 +0000 UTC"}, Hostname:"ci-4081.3.5-n-a13ccab244", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:02:27.051438 containerd[1714]: 2025-09-13 00:02:26.900 [INFO][5237] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:02:27.051438 containerd[1714]: 2025-09-13 00:02:26.915 [INFO][5237] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:02:27.051438 containerd[1714]: 2025-09-13 00:02:26.915 [INFO][5237] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-n-a13ccab244' Sep 13 00:02:27.051438 containerd[1714]: 2025-09-13 00:02:26.957 [INFO][5237] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0b0a843ae894241a6b7cbd8a2aa904f8210396c3a3cc5b725ab6204e4381f0c6" host="ci-4081.3.5-n-a13ccab244" Sep 13 00:02:27.051438 containerd[1714]: 2025-09-13 00:02:26.962 [INFO][5237] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-n-a13ccab244" Sep 13 00:02:27.051438 containerd[1714]: 2025-09-13 00:02:26.969 [INFO][5237] ipam/ipam.go 511: Trying affinity for 192.168.38.128/26 host="ci-4081.3.5-n-a13ccab244" Sep 13 00:02:27.051438 containerd[1714]: 2025-09-13 00:02:26.972 [INFO][5237] ipam/ipam.go 158: Attempting to load block cidr=192.168.38.128/26 host="ci-4081.3.5-n-a13ccab244" Sep 13 00:02:27.051438 containerd[1714]: 2025-09-13 00:02:26.975 [INFO][5237] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.38.128/26 host="ci-4081.3.5-n-a13ccab244" Sep 13 00:02:27.051438 containerd[1714]: 2025-09-13 00:02:26.975 [INFO][5237] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.38.128/26 handle="k8s-pod-network.0b0a843ae894241a6b7cbd8a2aa904f8210396c3a3cc5b725ab6204e4381f0c6" host="ci-4081.3.5-n-a13ccab244" Sep 13 00:02:27.051438 containerd[1714]: 2025-09-13 00:02:26.977 [INFO][5237] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0b0a843ae894241a6b7cbd8a2aa904f8210396c3a3cc5b725ab6204e4381f0c6 Sep 13 00:02:27.051438 containerd[1714]: 2025-09-13 00:02:26.987 [INFO][5237] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.38.128/26 handle="k8s-pod-network.0b0a843ae894241a6b7cbd8a2aa904f8210396c3a3cc5b725ab6204e4381f0c6" host="ci-4081.3.5-n-a13ccab244" Sep 13 00:02:27.051438 containerd[1714]: 2025-09-13 00:02:27.015 [INFO][5237] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.38.134/26] block=192.168.38.128/26 handle="k8s-pod-network.0b0a843ae894241a6b7cbd8a2aa904f8210396c3a3cc5b725ab6204e4381f0c6" host="ci-4081.3.5-n-a13ccab244" Sep 13 00:02:27.051438 containerd[1714]: 2025-09-13 00:02:27.015 [INFO][5237] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.38.134/26] handle="k8s-pod-network.0b0a843ae894241a6b7cbd8a2aa904f8210396c3a3cc5b725ab6204e4381f0c6" host="ci-4081.3.5-n-a13ccab244" Sep 13 00:02:27.051438 containerd[1714]: 2025-09-13 00:02:27.015 [INFO][5237] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
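A small decoding aid for the v3.WorkloadEndpoint dumps in this log (the coredns-7c65d6cfc9-wfk44 entries a few lines up, and the ones that follow). The dumps look like Go-syntax (%#v) output, which renders unsigned integers as hex literals, so Port:0x35 is 53 (the dns and dns-tcp entries) and Port:0x23c1 is 9153 (metrics) — matching the decimal {dns UDP 53 0} / {metrics TCP 9153 0} triples visible in the plugin.go 340 "found existing endpoint" lines elsewhere in this log:

```go
package main

import "fmt"

func main() {
	// The endpoint dumps print unsigned ports in Go syntax (hex).
	for _, p := range []uint16{0x35, 0x23c1} {
		fmt.Printf("%#v = %d\n", p, p) // 0x35 = 53 (dns, dns-tcp); 0x23c1 = 9153 (metrics)
	}
}
```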
Sep 13 00:02:27.051438 containerd[1714]: 2025-09-13 00:02:27.015 [INFO][5237] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.38.134/26] IPv6=[] ContainerID="0b0a843ae894241a6b7cbd8a2aa904f8210396c3a3cc5b725ab6204e4381f0c6" HandleID="k8s-pod-network.0b0a843ae894241a6b7cbd8a2aa904f8210396c3a3cc5b725ab6204e4381f0c6" Workload="ci--4081.3.5--n--a13ccab244-k8s-csi--node--driver--xm8sj-eth0" Sep 13 00:02:27.051976 containerd[1714]: 2025-09-13 00:02:27.021 [INFO][5197] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0b0a843ae894241a6b7cbd8a2aa904f8210396c3a3cc5b725ab6204e4381f0c6" Namespace="calico-system" Pod="csi-node-driver-xm8sj" WorkloadEndpoint="ci--4081.3.5--n--a13ccab244-k8s-csi--node--driver--xm8sj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--a13ccab244-k8s-csi--node--driver--xm8sj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a5fb5c00-ba33-4510-871e-6572e3bb79c8", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 2, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-a13ccab244", ContainerID:"", Pod:"csi-node-driver-xm8sj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.38.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibb28e7e77a6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:02:27.051976 containerd[1714]: 2025-09-13 00:02:27.022 [INFO][5197] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.38.134/32] ContainerID="0b0a843ae894241a6b7cbd8a2aa904f8210396c3a3cc5b725ab6204e4381f0c6" Namespace="calico-system" Pod="csi-node-driver-xm8sj" WorkloadEndpoint="ci--4081.3.5--n--a13ccab244-k8s-csi--node--driver--xm8sj-eth0" Sep 13 00:02:27.051976 containerd[1714]: 2025-09-13 00:02:27.022 [INFO][5197] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibb28e7e77a6 ContainerID="0b0a843ae894241a6b7cbd8a2aa904f8210396c3a3cc5b725ab6204e4381f0c6" Namespace="calico-system" Pod="csi-node-driver-xm8sj" WorkloadEndpoint="ci--4081.3.5--n--a13ccab244-k8s-csi--node--driver--xm8sj-eth0" Sep 13 00:02:27.051976 containerd[1714]: 2025-09-13 00:02:27.028 [INFO][5197] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0b0a843ae894241a6b7cbd8a2aa904f8210396c3a3cc5b725ab6204e4381f0c6" Namespace="calico-system" Pod="csi-node-driver-xm8sj" WorkloadEndpoint="ci--4081.3.5--n--a13ccab244-k8s-csi--node--driver--xm8sj-eth0" Sep 13 00:02:27.051976 containerd[1714]: 2025-09-13 00:02:27.031 [INFO][5197] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="0b0a843ae894241a6b7cbd8a2aa904f8210396c3a3cc5b725ab6204e4381f0c6" Namespace="calico-system" Pod="csi-node-driver-xm8sj" WorkloadEndpoint="ci--4081.3.5--n--a13ccab244-k8s-csi--node--driver--xm8sj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--a13ccab244-k8s-csi--node--driver--xm8sj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a5fb5c00-ba33-4510-871e-6572e3bb79c8", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 2, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-a13ccab244", ContainerID:"0b0a843ae894241a6b7cbd8a2aa904f8210396c3a3cc5b725ab6204e4381f0c6", Pod:"csi-node-driver-xm8sj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.38.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibb28e7e77a6", MAC:"fe:4b:a9:70:3b:29", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:02:27.051976 containerd[1714]: 2025-09-13 00:02:27.046 [INFO][5197] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0b0a843ae894241a6b7cbd8a2aa904f8210396c3a3cc5b725ab6204e4381f0c6" Namespace="calico-system" Pod="csi-node-driver-xm8sj" WorkloadEndpoint="ci--4081.3.5--n--a13ccab244-k8s-csi--node--driver--xm8sj-eth0" Sep 13 00:02:27.084688 containerd[1714]: time="2025-09-13T00:02:27.084630606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-wfk44,Uid:b2871a94-de4f-43dc-9c64-d0d7c69fe615,Namespace:kube-system,Attempt:1,} returns sandbox id \"dfbaf709bd2a86801e45083e16e9840b19339e55555063e1119bbcc555d8e333\"" Sep 13 00:02:27.088100 containerd[1714]: time="2025-09-13T00:02:27.086693923Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:02:27.088240 containerd[1714]: time="2025-09-13T00:02:27.087704041Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:02:27.088240 containerd[1714]: time="2025-09-13T00:02:27.087721041Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:02:27.088240 containerd[1714]: time="2025-09-13T00:02:27.087905401Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:02:27.092542 containerd[1714]: time="2025-09-13T00:02:27.091661795Z" level=info msg="CreateContainer within sandbox \"dfbaf709bd2a86801e45083e16e9840b19339e55555063e1119bbcc555d8e333\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:02:27.112198 systemd[1]: Started cri-containerd-0b0a843ae894241a6b7cbd8a2aa904f8210396c3a3cc5b725ab6204e4381f0c6.scope - libcontainer container 0b0a843ae894241a6b7cbd8a2aa904f8210396c3a3cc5b725ab6204e4381f0c6. Sep 13 00:02:27.141218 containerd[1714]: time="2025-09-13T00:02:27.141153513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xm8sj,Uid:a5fb5c00-ba33-4510-871e-6572e3bb79c8,Namespace:calico-system,Attempt:1,} returns sandbox id \"0b0a843ae894241a6b7cbd8a2aa904f8210396c3a3cc5b725ab6204e4381f0c6\"" Sep 13 00:02:27.153863 containerd[1714]: time="2025-09-13T00:02:27.153754452Z" level=info msg="CreateContainer within sandbox \"dfbaf709bd2a86801e45083e16e9840b19339e55555063e1119bbcc555d8e333\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d80130af49ed4536012eaaa6227f6bb76d8f9c77adc3b589a73ad74fbbbf65f9\"" Sep 13 00:02:27.155091 containerd[1714]: time="2025-09-13T00:02:27.154351771Z" level=info msg="StartContainer for \"d80130af49ed4536012eaaa6227f6bb76d8f9c77adc3b589a73ad74fbbbf65f9\"" Sep 13 00:02:27.195562 systemd[1]: Started cri-containerd-d80130af49ed4536012eaaa6227f6bb76d8f9c77adc3b589a73ad74fbbbf65f9.scope - libcontainer container d80130af49ed4536012eaaa6227f6bb76d8f9c77adc3b589a73ad74fbbbf65f9. Sep 13 00:02:27.234340 containerd[1714]: time="2025-09-13T00:02:27.234231958Z" level=info msg="StartContainer for \"d80130af49ed4536012eaaa6227f6bb76d8f9c77adc3b589a73ad74fbbbf65f9\" returns successfully" Sep 13 00:02:27.331152 containerd[1714]: time="2025-09-13T00:02:27.330066959Z" level=info msg="StopPodSandbox for \"a40af8bcd87db90cb1dc5121c4bf81444d9f5b8ea9b13fe3dedab7f4a9739efe\"" Sep 13 00:02:27.332978 containerd[1714]: time="2025-09-13T00:02:27.332172756Z" level=info msg="StopPodSandbox for \"c94244e508898d8531f5c2f8daf58e39611baff8ea98739a24bdb34b647425b8\"" Sep 13 00:02:27.540138 containerd[1714]: 2025-09-13 00:02:27.478 [INFO][5418] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c94244e508898d8531f5c2f8daf58e39611baff8ea98739a24bdb34b647425b8" Sep 13 00:02:27.540138 containerd[1714]: 2025-09-13 00:02:27.478 [INFO][5418] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c94244e508898d8531f5c2f8daf58e39611baff8ea98739a24bdb34b647425b8" iface="eth0" netns="/var/run/netns/cni-28c33d3b-6637-3a73-bdc0-d6dcdde1bb38" Sep 13 00:02:27.540138 containerd[1714]: 2025-09-13 00:02:27.478 [INFO][5418] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c94244e508898d8531f5c2f8daf58e39611baff8ea98739a24bdb34b647425b8" iface="eth0" netns="/var/run/netns/cni-28c33d3b-6637-3a73-bdc0-d6dcdde1bb38" Sep 13 00:02:27.540138 containerd[1714]: 2025-09-13 00:02:27.478 [INFO][5418] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="c94244e508898d8531f5c2f8daf58e39611baff8ea98739a24bdb34b647425b8" iface="eth0" netns="/var/run/netns/cni-28c33d3b-6637-3a73-bdc0-d6dcdde1bb38" Sep 13 00:02:27.540138 containerd[1714]: 2025-09-13 00:02:27.478 [INFO][5418] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c94244e508898d8531f5c2f8daf58e39611baff8ea98739a24bdb34b647425b8" Sep 13 00:02:27.540138 containerd[1714]: 2025-09-13 00:02:27.478 [INFO][5418] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c94244e508898d8531f5c2f8daf58e39611baff8ea98739a24bdb34b647425b8" Sep 13 00:02:27.540138 containerd[1714]: 2025-09-13 00:02:27.514 [INFO][5441] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c94244e508898d8531f5c2f8daf58e39611baff8ea98739a24bdb34b647425b8" HandleID="k8s-pod-network.c94244e508898d8531f5c2f8daf58e39611baff8ea98739a24bdb34b647425b8" Workload="ci--4081.3.5--n--a13ccab244-k8s-calico--kube--controllers--8447f55f8b--9w8q9-eth0" Sep 13 00:02:27.540138 containerd[1714]: 2025-09-13 00:02:27.515 [INFO][5441] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:02:27.540138 containerd[1714]: 2025-09-13 00:02:27.515 [INFO][5441] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:02:27.540138 containerd[1714]: 2025-09-13 00:02:27.528 [WARNING][5441] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c94244e508898d8531f5c2f8daf58e39611baff8ea98739a24bdb34b647425b8" HandleID="k8s-pod-network.c94244e508898d8531f5c2f8daf58e39611baff8ea98739a24bdb34b647425b8" Workload="ci--4081.3.5--n--a13ccab244-k8s-calico--kube--controllers--8447f55f8b--9w8q9-eth0" Sep 13 00:02:27.540138 containerd[1714]: 2025-09-13 00:02:27.529 [INFO][5441] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c94244e508898d8531f5c2f8daf58e39611baff8ea98739a24bdb34b647425b8" HandleID="k8s-pod-network.c94244e508898d8531f5c2f8daf58e39611baff8ea98739a24bdb34b647425b8" Workload="ci--4081.3.5--n--a13ccab244-k8s-calico--kube--controllers--8447f55f8b--9w8q9-eth0" Sep 13 00:02:27.540138 containerd[1714]: 2025-09-13 00:02:27.531 [INFO][5441] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:02:27.540138 containerd[1714]: 2025-09-13 00:02:27.537 [INFO][5418] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c94244e508898d8531f5c2f8daf58e39611baff8ea98739a24bdb34b647425b8" Sep 13 00:02:27.544117 systemd[1]: run-netns-cni\x2d28c33d3b\x2d6637\x2d3a73\x2dbdc0\x2dd6dcdde1bb38.mount: Deactivated successfully. 
Sep 13 00:02:27.552629 containerd[1714]: time="2025-09-13T00:02:27.552581870Z" level=info msg="TearDown network for sandbox \"c94244e508898d8531f5c2f8daf58e39611baff8ea98739a24bdb34b647425b8\" successfully" Sep 13 00:02:27.552629 containerd[1714]: time="2025-09-13T00:02:27.552631110Z" level=info msg="StopPodSandbox for \"c94244e508898d8531f5c2f8daf58e39611baff8ea98739a24bdb34b647425b8\" returns successfully" Sep 13 00:02:27.554193 containerd[1714]: time="2025-09-13T00:02:27.553429589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8447f55f8b-9w8q9,Uid:ab136b13-4df9-4d75-8b01-4675fa58dba1,Namespace:calico-system,Attempt:1,}" Sep 13 00:02:27.580325 containerd[1714]: 2025-09-13 00:02:27.477 [INFO][5421] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a40af8bcd87db90cb1dc5121c4bf81444d9f5b8ea9b13fe3dedab7f4a9739efe" Sep 13 00:02:27.580325 containerd[1714]: 2025-09-13 00:02:27.477 [INFO][5421] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a40af8bcd87db90cb1dc5121c4bf81444d9f5b8ea9b13fe3dedab7f4a9739efe" iface="eth0" netns="/var/run/netns/cni-3e355f78-0aec-2e31-11d6-2cc73f4e2385" Sep 13 00:02:27.580325 containerd[1714]: 2025-09-13 00:02:27.478 [INFO][5421] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a40af8bcd87db90cb1dc5121c4bf81444d9f5b8ea9b13fe3dedab7f4a9739efe" iface="eth0" netns="/var/run/netns/cni-3e355f78-0aec-2e31-11d6-2cc73f4e2385" Sep 13 00:02:27.580325 containerd[1714]: 2025-09-13 00:02:27.478 [INFO][5421] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a40af8bcd87db90cb1dc5121c4bf81444d9f5b8ea9b13fe3dedab7f4a9739efe" iface="eth0" netns="/var/run/netns/cni-3e355f78-0aec-2e31-11d6-2cc73f4e2385" Sep 13 00:02:27.580325 containerd[1714]: 2025-09-13 00:02:27.478 [INFO][5421] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a40af8bcd87db90cb1dc5121c4bf81444d9f5b8ea9b13fe3dedab7f4a9739efe" Sep 13 00:02:27.580325 containerd[1714]: 2025-09-13 00:02:27.478 [INFO][5421] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a40af8bcd87db90cb1dc5121c4bf81444d9f5b8ea9b13fe3dedab7f4a9739efe" Sep 13 00:02:27.580325 containerd[1714]: 2025-09-13 00:02:27.523 [INFO][5440] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a40af8bcd87db90cb1dc5121c4bf81444d9f5b8ea9b13fe3dedab7f4a9739efe" HandleID="k8s-pod-network.a40af8bcd87db90cb1dc5121c4bf81444d9f5b8ea9b13fe3dedab7f4a9739efe" Workload="ci--4081.3.5--n--a13ccab244-k8s-coredns--7c65d6cfc9--wm5tv-eth0" Sep 13 00:02:27.580325 containerd[1714]: 2025-09-13 00:02:27.524 [INFO][5440] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:02:27.580325 containerd[1714]: 2025-09-13 00:02:27.531 [INFO][5440] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:02:27.580325 containerd[1714]: 2025-09-13 00:02:27.566 [WARNING][5440] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a40af8bcd87db90cb1dc5121c4bf81444d9f5b8ea9b13fe3dedab7f4a9739efe" HandleID="k8s-pod-network.a40af8bcd87db90cb1dc5121c4bf81444d9f5b8ea9b13fe3dedab7f4a9739efe" Workload="ci--4081.3.5--n--a13ccab244-k8s-coredns--7c65d6cfc9--wm5tv-eth0" Sep 13 00:02:27.580325 containerd[1714]: 2025-09-13 00:02:27.567 [INFO][5440] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a40af8bcd87db90cb1dc5121c4bf81444d9f5b8ea9b13fe3dedab7f4a9739efe" HandleID="k8s-pod-network.a40af8bcd87db90cb1dc5121c4bf81444d9f5b8ea9b13fe3dedab7f4a9739efe" Workload="ci--4081.3.5--n--a13ccab244-k8s-coredns--7c65d6cfc9--wm5tv-eth0" Sep 13 00:02:27.580325 containerd[1714]: 2025-09-13 00:02:27.570 [INFO][5440] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:02:27.580325 containerd[1714]: 2025-09-13 00:02:27.573 [INFO][5421] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a40af8bcd87db90cb1dc5121c4bf81444d9f5b8ea9b13fe3dedab7f4a9739efe" Sep 13 00:02:27.584988 systemd[1]: run-netns-cni\x2d3e355f78\x2d0aec\x2d2e31\x2d11d6\x2d2cc73f4e2385.mount: Deactivated successfully. Sep 13 00:02:27.589927 containerd[1714]: time="2025-09-13T00:02:27.589582289Z" level=info msg="TearDown network for sandbox \"a40af8bcd87db90cb1dc5121c4bf81444d9f5b8ea9b13fe3dedab7f4a9739efe\" successfully" Sep 13 00:02:27.589927 containerd[1714]: time="2025-09-13T00:02:27.589629969Z" level=info msg="StopPodSandbox for \"a40af8bcd87db90cb1dc5121c4bf81444d9f5b8ea9b13fe3dedab7f4a9739efe\" returns successfully" Sep 13 00:02:27.590714 containerd[1714]: time="2025-09-13T00:02:27.590505407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-wm5tv,Uid:0aaf6e3b-2112-4be4-8bd2-d8200dc2d876,Namespace:kube-system,Attempt:1,}" Sep 13 00:02:27.628699 kubelet[3139]: I0913 00:02:27.628634 3139 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-wfk44" podStartSLOduration=41.628616024 podStartE2EDuration="41.628616024s" podCreationTimestamp="2025-09-13 00:01:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:02:27.627539746 +0000 UTC m=+48.433221049" watchObservedRunningTime="2025-09-13 00:02:27.628616024 +0000 UTC m=+48.434297327" Sep 13 00:02:27.628886 kubelet[3139]: I0913 00:02:27.628769 3139 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5dd67bc454-rxrtp" podStartSLOduration=29.729476094 podStartE2EDuration="31.628764584s" podCreationTimestamp="2025-09-13 00:01:56 +0000 UTC" firstStartedPulling="2025-09-13 00:02:24.792371368 +0000 UTC m=+45.598052671" lastFinishedPulling="2025-09-13 00:02:26.691659858 +0000 UTC m=+47.497341161" observedRunningTime="2025-09-13 00:02:27.605584342 +0000 UTC m=+48.411265645" watchObservedRunningTime="2025-09-13 00:02:27.628764584 +0000 UTC m=+48.434445887" Sep 13 00:02:27.663253 kubelet[3139]: I0913 00:02:27.663189 3139 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-77b8b896d4-dwdsk" podStartSLOduration=26.315145329 podStartE2EDuration="29.663169807s" podCreationTimestamp="2025-09-13 00:01:58 +0000 UTC" firstStartedPulling="2025-09-13 00:02:22.913127054 +0000 UTC m=+43.718808317" lastFinishedPulling="2025-09-13 00:02:26.261151492 +0000 UTC m=+47.066832795" observedRunningTime="2025-09-13 00:02:27.660740891 +0000 UTC m=+48.466422194" watchObservedRunningTime="2025-09-13 
00:02:27.663169807 +0000 UTC m=+48.468851110" Sep 13 00:02:28.073187 systemd-networkd[1567]: calic15881fce0c: Gained IPv6LL Sep 13 00:02:28.236048 kernel: bpftool[5524]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 13 00:02:28.331411 containerd[1714]: time="2025-09-13T00:02:28.331298299Z" level=info msg="StopPodSandbox for \"807ca00e6df247995bf8fc66c092154982bf3c3d5167b2143ead362087ddff60\"" Sep 13 00:02:28.417628 systemd-networkd[1567]: calib66f97ec93d: Link UP Sep 13 00:02:28.420007 systemd-networkd[1567]: calib66f97ec93d: Gained carrier Sep 13 00:02:28.451197 containerd[1714]: 2025-09-13 00:02:28.215 [INFO][5495] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--n--a13ccab244-k8s-calico--kube--controllers--8447f55f8b--9w8q9-eth0 calico-kube-controllers-8447f55f8b- calico-system ab136b13-4df9-4d75-8b01-4675fa58dba1 1013 0 2025-09-13 00:02:01 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:8447f55f8b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.5-n-a13ccab244 calico-kube-controllers-8447f55f8b-9w8q9 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calib66f97ec93d [] [] }} ContainerID="358f1899d134e484402438702720092ec59e5e915182df9bc1c74dd23c778e46" Namespace="calico-system" Pod="calico-kube-controllers-8447f55f8b-9w8q9" WorkloadEndpoint="ci--4081.3.5--n--a13ccab244-k8s-calico--kube--controllers--8447f55f8b--9w8q9-" Sep 13 00:02:28.451197 containerd[1714]: 2025-09-13 00:02:28.215 [INFO][5495] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="358f1899d134e484402438702720092ec59e5e915182df9bc1c74dd23c778e46" Namespace="calico-system" Pod="calico-kube-controllers-8447f55f8b-9w8q9" WorkloadEndpoint="ci--4081.3.5--n--a13ccab244-k8s-calico--kube--controllers--8447f55f8b--9w8q9-eth0" Sep 13 00:02:28.451197 containerd[1714]: 2025-09-13 00:02:28.308 [INFO][5525] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="358f1899d134e484402438702720092ec59e5e915182df9bc1c74dd23c778e46" HandleID="k8s-pod-network.358f1899d134e484402438702720092ec59e5e915182df9bc1c74dd23c778e46" Workload="ci--4081.3.5--n--a13ccab244-k8s-calico--kube--controllers--8447f55f8b--9w8q9-eth0" Sep 13 00:02:28.451197 containerd[1714]: 2025-09-13 00:02:28.308 [INFO][5525] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="358f1899d134e484402438702720092ec59e5e915182df9bc1c74dd23c778e46" HandleID="k8s-pod-network.358f1899d134e484402438702720092ec59e5e915182df9bc1c74dd23c778e46" Workload="ci--4081.3.5--n--a13ccab244-k8s-calico--kube--controllers--8447f55f8b--9w8q9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b610), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.5-n-a13ccab244", "pod":"calico-kube-controllers-8447f55f8b-9w8q9", "timestamp":"2025-09-13 00:02:28.308788456 +0000 UTC"}, Hostname:"ci-4081.3.5-n-a13ccab244", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:02:28.451197 containerd[1714]: 2025-09-13 00:02:28.308 [INFO][5525] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 13 00:02:28.451197 containerd[1714]: 2025-09-13 00:02:28.309 [INFO][5525] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:02:28.451197 containerd[1714]: 2025-09-13 00:02:28.309 [INFO][5525] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-n-a13ccab244' Sep 13 00:02:28.451197 containerd[1714]: 2025-09-13 00:02:28.322 [INFO][5525] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.358f1899d134e484402438702720092ec59e5e915182df9bc1c74dd23c778e46" host="ci-4081.3.5-n-a13ccab244" Sep 13 00:02:28.451197 containerd[1714]: 2025-09-13 00:02:28.342 [INFO][5525] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-n-a13ccab244" Sep 13 00:02:28.451197 containerd[1714]: 2025-09-13 00:02:28.362 [INFO][5525] ipam/ipam.go 511: Trying affinity for 192.168.38.128/26 host="ci-4081.3.5-n-a13ccab244" Sep 13 00:02:28.451197 containerd[1714]: 2025-09-13 00:02:28.367 [INFO][5525] ipam/ipam.go 158: Attempting to load block cidr=192.168.38.128/26 host="ci-4081.3.5-n-a13ccab244" Sep 13 00:02:28.451197 containerd[1714]: 2025-09-13 00:02:28.373 [INFO][5525] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.38.128/26 host="ci-4081.3.5-n-a13ccab244" Sep 13 00:02:28.451197 containerd[1714]: 2025-09-13 00:02:28.373 [INFO][5525] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.38.128/26 handle="k8s-pod-network.358f1899d134e484402438702720092ec59e5e915182df9bc1c74dd23c778e46" host="ci-4081.3.5-n-a13ccab244" Sep 13 00:02:28.451197 containerd[1714]: 2025-09-13 00:02:28.376 [INFO][5525] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.358f1899d134e484402438702720092ec59e5e915182df9bc1c74dd23c778e46 Sep 13 00:02:28.451197 containerd[1714]: 2025-09-13 00:02:28.388 [INFO][5525] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.38.128/26 handle="k8s-pod-network.358f1899d134e484402438702720092ec59e5e915182df9bc1c74dd23c778e46" host="ci-4081.3.5-n-a13ccab244" Sep 13 00:02:28.451197 containerd[1714]: 2025-09-13 00:02:28.402 [INFO][5525] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.38.135/26] block=192.168.38.128/26 handle="k8s-pod-network.358f1899d134e484402438702720092ec59e5e915182df9bc1c74dd23c778e46" host="ci-4081.3.5-n-a13ccab244" Sep 13 00:02:28.451197 containerd[1714]: 2025-09-13 00:02:28.403 [INFO][5525] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.38.135/26] handle="k8s-pod-network.358f1899d134e484402438702720092ec59e5e915182df9bc1c74dd23c778e46" host="ci-4081.3.5-n-a13ccab244" Sep 13 00:02:28.451197 containerd[1714]: 2025-09-13 00:02:28.403 [INFO][5525] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
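A consistency check on the kubelet pod_startup_latency_tracker entries a few lines up. podStartSLOduration tracks the startup SLI, which excludes image-pull time, and the logged values line up with podStartE2EDuration minus the pull window when computed on the monotonic clock (the m=+... offsets). For calico-apiserver-77b8b896d4-dwdsk:

```go
package main

import "fmt"

func main() {
	// Monotonic offsets (the m=+... values) from the kubelet entries above.
	const (
		e2e          = 29.663169807 // podStartE2EDuration, seconds
		firstPulling = 43.718808317 // firstStartedPulling, m=+...
		lastFinished = 47.066832795 // lastFinishedPulling, m=+...
	)
	fmt.Printf("%.9f\n", e2e-(lastFinished-firstPulling)) // 26.315145329 = podStartSLOduration
}
```

For coredns-7c65d6cfc9-wfk44 the pull timestamps are the zero time (nothing was pulled), so SLO and E2E agree at 41.628616024s.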
Sep 13 00:02:28.451197 containerd[1714]: 2025-09-13 00:02:28.403 [INFO][5525] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.38.135/26] IPv6=[] ContainerID="358f1899d134e484402438702720092ec59e5e915182df9bc1c74dd23c778e46" HandleID="k8s-pod-network.358f1899d134e484402438702720092ec59e5e915182df9bc1c74dd23c778e46" Workload="ci--4081.3.5--n--a13ccab244-k8s-calico--kube--controllers--8447f55f8b--9w8q9-eth0" Sep 13 00:02:28.452140 containerd[1714]: 2025-09-13 00:02:28.407 [INFO][5495] cni-plugin/k8s.go 418: Populated endpoint ContainerID="358f1899d134e484402438702720092ec59e5e915182df9bc1c74dd23c778e46" Namespace="calico-system" Pod="calico-kube-controllers-8447f55f8b-9w8q9" WorkloadEndpoint="ci--4081.3.5--n--a13ccab244-k8s-calico--kube--controllers--8447f55f8b--9w8q9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--a13ccab244-k8s-calico--kube--controllers--8447f55f8b--9w8q9-eth0", GenerateName:"calico-kube-controllers-8447f55f8b-", Namespace:"calico-system", SelfLink:"", UID:"ab136b13-4df9-4d75-8b01-4675fa58dba1", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 2, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8447f55f8b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-a13ccab244", ContainerID:"", Pod:"calico-kube-controllers-8447f55f8b-9w8q9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.38.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib66f97ec93d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:02:28.452140 containerd[1714]: 2025-09-13 00:02:28.407 [INFO][5495] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.38.135/32] ContainerID="358f1899d134e484402438702720092ec59e5e915182df9bc1c74dd23c778e46" Namespace="calico-system" Pod="calico-kube-controllers-8447f55f8b-9w8q9" WorkloadEndpoint="ci--4081.3.5--n--a13ccab244-k8s-calico--kube--controllers--8447f55f8b--9w8q9-eth0" Sep 13 00:02:28.452140 containerd[1714]: 2025-09-13 00:02:28.407 [INFO][5495] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib66f97ec93d ContainerID="358f1899d134e484402438702720092ec59e5e915182df9bc1c74dd23c778e46" Namespace="calico-system" Pod="calico-kube-controllers-8447f55f8b-9w8q9" WorkloadEndpoint="ci--4081.3.5--n--a13ccab244-k8s-calico--kube--controllers--8447f55f8b--9w8q9-eth0" Sep 13 00:02:28.452140 containerd[1714]: 2025-09-13 00:02:28.422 [INFO][5495] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="358f1899d134e484402438702720092ec59e5e915182df9bc1c74dd23c778e46" Namespace="calico-system" Pod="calico-kube-controllers-8447f55f8b-9w8q9" 
WorkloadEndpoint="ci--4081.3.5--n--a13ccab244-k8s-calico--kube--controllers--8447f55f8b--9w8q9-eth0" Sep 13 00:02:28.452140 containerd[1714]: 2025-09-13 00:02:28.423 [INFO][5495] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="358f1899d134e484402438702720092ec59e5e915182df9bc1c74dd23c778e46" Namespace="calico-system" Pod="calico-kube-controllers-8447f55f8b-9w8q9" WorkloadEndpoint="ci--4081.3.5--n--a13ccab244-k8s-calico--kube--controllers--8447f55f8b--9w8q9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--a13ccab244-k8s-calico--kube--controllers--8447f55f8b--9w8q9-eth0", GenerateName:"calico-kube-controllers-8447f55f8b-", Namespace:"calico-system", SelfLink:"", UID:"ab136b13-4df9-4d75-8b01-4675fa58dba1", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 2, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8447f55f8b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-a13ccab244", ContainerID:"358f1899d134e484402438702720092ec59e5e915182df9bc1c74dd23c778e46", Pod:"calico-kube-controllers-8447f55f8b-9w8q9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.38.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib66f97ec93d", MAC:"62:69:c9:6a:31:48", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:02:28.452140 containerd[1714]: 2025-09-13 00:02:28.444 [INFO][5495] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="358f1899d134e484402438702720092ec59e5e915182df9bc1c74dd23c778e46" Namespace="calico-system" Pod="calico-kube-controllers-8447f55f8b-9w8q9" WorkloadEndpoint="ci--4081.3.5--n--a13ccab244-k8s-calico--kube--controllers--8447f55f8b--9w8q9-eth0" Sep 13 00:02:28.520419 systemd-networkd[1567]: cali498eccb5e3c: Link UP Sep 13 00:02:28.522142 systemd-networkd[1567]: cali498eccb5e3c: Gained carrier Sep 13 00:02:28.557940 containerd[1714]: 2025-09-13 00:02:28.435 [INFO][5547] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="807ca00e6df247995bf8fc66c092154982bf3c3d5167b2143ead362087ddff60" Sep 13 00:02:28.557940 containerd[1714]: 2025-09-13 00:02:28.435 [INFO][5547] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="807ca00e6df247995bf8fc66c092154982bf3c3d5167b2143ead362087ddff60" iface="eth0" netns="/var/run/netns/cni-3855648b-4bcb-d8b4-0a6d-a8fd9e36b757" Sep 13 00:02:28.557940 containerd[1714]: 2025-09-13 00:02:28.436 [INFO][5547] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="807ca00e6df247995bf8fc66c092154982bf3c3d5167b2143ead362087ddff60" iface="eth0" netns="/var/run/netns/cni-3855648b-4bcb-d8b4-0a6d-a8fd9e36b757" Sep 13 00:02:28.557940 containerd[1714]: 2025-09-13 00:02:28.436 [INFO][5547] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="807ca00e6df247995bf8fc66c092154982bf3c3d5167b2143ead362087ddff60" iface="eth0" netns="/var/run/netns/cni-3855648b-4bcb-d8b4-0a6d-a8fd9e36b757" Sep 13 00:02:28.557940 containerd[1714]: 2025-09-13 00:02:28.436 [INFO][5547] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="807ca00e6df247995bf8fc66c092154982bf3c3d5167b2143ead362087ddff60" Sep 13 00:02:28.557940 containerd[1714]: 2025-09-13 00:02:28.436 [INFO][5547] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="807ca00e6df247995bf8fc66c092154982bf3c3d5167b2143ead362087ddff60" Sep 13 00:02:28.557940 containerd[1714]: 2025-09-13 00:02:28.475 [INFO][5561] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="807ca00e6df247995bf8fc66c092154982bf3c3d5167b2143ead362087ddff60" HandleID="k8s-pod-network.807ca00e6df247995bf8fc66c092154982bf3c3d5167b2143ead362087ddff60" Workload="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--5dd67bc454--q6cd7-eth0" Sep 13 00:02:28.557940 containerd[1714]: 2025-09-13 00:02:28.476 [INFO][5561] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:02:28.557940 containerd[1714]: 2025-09-13 00:02:28.509 [INFO][5561] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:02:28.557940 containerd[1714]: 2025-09-13 00:02:28.532 [WARNING][5561] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="807ca00e6df247995bf8fc66c092154982bf3c3d5167b2143ead362087ddff60" HandleID="k8s-pod-network.807ca00e6df247995bf8fc66c092154982bf3c3d5167b2143ead362087ddff60" Workload="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--5dd67bc454--q6cd7-eth0" Sep 13 00:02:28.557940 containerd[1714]: 2025-09-13 00:02:28.533 [INFO][5561] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="807ca00e6df247995bf8fc66c092154982bf3c3d5167b2143ead362087ddff60" HandleID="k8s-pod-network.807ca00e6df247995bf8fc66c092154982bf3c3d5167b2143ead362087ddff60" Workload="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--5dd67bc454--q6cd7-eth0" Sep 13 00:02:28.557940 containerd[1714]: 2025-09-13 00:02:28.539 [INFO][5561] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:02:28.557940 containerd[1714]: 2025-09-13 00:02:28.550 [INFO][5547] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="807ca00e6df247995bf8fc66c092154982bf3c3d5167b2143ead362087ddff60" Sep 13 00:02:28.560105 containerd[1714]: 2025-09-13 00:02:28.295 [INFO][5505] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--n--a13ccab244-k8s-coredns--7c65d6cfc9--wm5tv-eth0 coredns-7c65d6cfc9- kube-system 0aaf6e3b-2112-4be4-8bd2-d8200dc2d876 1012 0 2025-09-13 00:01:46 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.5-n-a13ccab244 coredns-7c65d6cfc9-wm5tv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali498eccb5e3c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="ae22384017e810641a7002d521cf77eb5399f5435979b3f4c4da77431815bcd8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wm5tv" WorkloadEndpoint="ci--4081.3.5--n--a13ccab244-k8s-coredns--7c65d6cfc9--wm5tv-" Sep 13 00:02:28.560105 containerd[1714]: 2025-09-13 00:02:28.296 [INFO][5505] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ae22384017e810641a7002d521cf77eb5399f5435979b3f4c4da77431815bcd8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wm5tv" WorkloadEndpoint="ci--4081.3.5--n--a13ccab244-k8s-coredns--7c65d6cfc9--wm5tv-eth0" Sep 13 00:02:28.560105 containerd[1714]: 2025-09-13 00:02:28.357 [INFO][5533] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ae22384017e810641a7002d521cf77eb5399f5435979b3f4c4da77431815bcd8" HandleID="k8s-pod-network.ae22384017e810641a7002d521cf77eb5399f5435979b3f4c4da77431815bcd8" Workload="ci--4081.3.5--n--a13ccab244-k8s-coredns--7c65d6cfc9--wm5tv-eth0" Sep 13 00:02:28.560105 containerd[1714]: 2025-09-13 00:02:28.357 [INFO][5533] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ae22384017e810641a7002d521cf77eb5399f5435979b3f4c4da77431815bcd8" HandleID="k8s-pod-network.ae22384017e810641a7002d521cf77eb5399f5435979b3f4c4da77431815bcd8" Workload="ci--4081.3.5--n--a13ccab244-k8s-coredns--7c65d6cfc9--wm5tv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40000e0a80), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.5-n-a13ccab244", "pod":"coredns-7c65d6cfc9-wm5tv", "timestamp":"2025-09-13 00:02:28.357830215 +0000 UTC"}, Hostname:"ci-4081.3.5-n-a13ccab244", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:02:28.560105 containerd[1714]: 2025-09-13 00:02:28.360 [INFO][5533] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:02:28.560105 containerd[1714]: 2025-09-13 00:02:28.403 [INFO][5533] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:02:28.560105 containerd[1714]: 2025-09-13 00:02:28.403 [INFO][5533] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-n-a13ccab244' Sep 13 00:02:28.560105 containerd[1714]: 2025-09-13 00:02:28.438 [INFO][5533] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ae22384017e810641a7002d521cf77eb5399f5435979b3f4c4da77431815bcd8" host="ci-4081.3.5-n-a13ccab244" Sep 13 00:02:28.560105 containerd[1714]: 2025-09-13 00:02:28.448 [INFO][5533] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-n-a13ccab244" Sep 13 00:02:28.560105 containerd[1714]: 2025-09-13 00:02:28.464 [INFO][5533] ipam/ipam.go 511: Trying affinity for 192.168.38.128/26 host="ci-4081.3.5-n-a13ccab244" Sep 13 00:02:28.560105 containerd[1714]: 2025-09-13 00:02:28.470 [INFO][5533] ipam/ipam.go 158: Attempting to load block cidr=192.168.38.128/26 host="ci-4081.3.5-n-a13ccab244" Sep 13 00:02:28.560105 containerd[1714]: 2025-09-13 00:02:28.477 [INFO][5533] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.38.128/26 host="ci-4081.3.5-n-a13ccab244" Sep 13 00:02:28.560105 containerd[1714]: 2025-09-13 00:02:28.477 [INFO][5533] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.38.128/26 handle="k8s-pod-network.ae22384017e810641a7002d521cf77eb5399f5435979b3f4c4da77431815bcd8" host="ci-4081.3.5-n-a13ccab244" Sep 13 00:02:28.560105 containerd[1714]: 2025-09-13 00:02:28.484 [INFO][5533] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ae22384017e810641a7002d521cf77eb5399f5435979b3f4c4da77431815bcd8 Sep 13 00:02:28.560105 containerd[1714]: 2025-09-13 00:02:28.495 [INFO][5533] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.38.128/26 handle="k8s-pod-network.ae22384017e810641a7002d521cf77eb5399f5435979b3f4c4da77431815bcd8" host="ci-4081.3.5-n-a13ccab244" Sep 13 00:02:28.560105 containerd[1714]: 2025-09-13 00:02:28.509 [INFO][5533] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.38.136/26] block=192.168.38.128/26 handle="k8s-pod-network.ae22384017e810641a7002d521cf77eb5399f5435979b3f4c4da77431815bcd8" host="ci-4081.3.5-n-a13ccab244" Sep 13 00:02:28.560105 containerd[1714]: 2025-09-13 00:02:28.509 [INFO][5533] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.38.136/26] handle="k8s-pod-network.ae22384017e810641a7002d521cf77eb5399f5435979b3f4c4da77431815bcd8" host="ci-4081.3.5-n-a13ccab244" Sep 13 00:02:28.560105 containerd[1714]: 2025-09-13 00:02:28.509 [INFO][5533] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:02:28.560105 containerd[1714]: 2025-09-13 00:02:28.509 [INFO][5533] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.38.136/26] IPv6=[] ContainerID="ae22384017e810641a7002d521cf77eb5399f5435979b3f4c4da77431815bcd8" HandleID="k8s-pod-network.ae22384017e810641a7002d521cf77eb5399f5435979b3f4c4da77431815bcd8" Workload="ci--4081.3.5--n--a13ccab244-k8s-coredns--7c65d6cfc9--wm5tv-eth0" Sep 13 00:02:28.562409 containerd[1714]: 2025-09-13 00:02:28.516 [INFO][5505] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ae22384017e810641a7002d521cf77eb5399f5435979b3f4c4da77431815bcd8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wm5tv" WorkloadEndpoint="ci--4081.3.5--n--a13ccab244-k8s-coredns--7c65d6cfc9--wm5tv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--a13ccab244-k8s-coredns--7c65d6cfc9--wm5tv-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"0aaf6e3b-2112-4be4-8bd2-d8200dc2d876", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 1, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-a13ccab244", ContainerID:"", Pod:"coredns-7c65d6cfc9-wm5tv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.38.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali498eccb5e3c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:02:28.562409 containerd[1714]: 2025-09-13 00:02:28.516 [INFO][5505] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.38.136/32] ContainerID="ae22384017e810641a7002d521cf77eb5399f5435979b3f4c4da77431815bcd8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wm5tv" WorkloadEndpoint="ci--4081.3.5--n--a13ccab244-k8s-coredns--7c65d6cfc9--wm5tv-eth0" Sep 13 00:02:28.562409 containerd[1714]: 2025-09-13 00:02:28.516 [INFO][5505] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali498eccb5e3c ContainerID="ae22384017e810641a7002d521cf77eb5399f5435979b3f4c4da77431815bcd8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wm5tv" WorkloadEndpoint="ci--4081.3.5--n--a13ccab244-k8s-coredns--7c65d6cfc9--wm5tv-eth0" Sep 13 00:02:28.562409 containerd[1714]: 2025-09-13 00:02:28.524 [INFO][5505] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ae22384017e810641a7002d521cf77eb5399f5435979b3f4c4da77431815bcd8" Namespace="kube-system" 
Pod="coredns-7c65d6cfc9-wm5tv" WorkloadEndpoint="ci--4081.3.5--n--a13ccab244-k8s-coredns--7c65d6cfc9--wm5tv-eth0" Sep 13 00:02:28.562409 containerd[1714]: 2025-09-13 00:02:28.534 [INFO][5505] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ae22384017e810641a7002d521cf77eb5399f5435979b3f4c4da77431815bcd8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wm5tv" WorkloadEndpoint="ci--4081.3.5--n--a13ccab244-k8s-coredns--7c65d6cfc9--wm5tv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--a13ccab244-k8s-coredns--7c65d6cfc9--wm5tv-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"0aaf6e3b-2112-4be4-8bd2-d8200dc2d876", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 1, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-a13ccab244", ContainerID:"ae22384017e810641a7002d521cf77eb5399f5435979b3f4c4da77431815bcd8", Pod:"coredns-7c65d6cfc9-wm5tv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.38.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali498eccb5e3c", MAC:"fa:67:f0:5f:13:35", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:02:28.562409 containerd[1714]: 2025-09-13 00:02:28.554 [INFO][5505] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ae22384017e810641a7002d521cf77eb5399f5435979b3f4c4da77431815bcd8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wm5tv" WorkloadEndpoint="ci--4081.3.5--n--a13ccab244-k8s-coredns--7c65d6cfc9--wm5tv-eth0" Sep 13 00:02:28.565326 systemd[1]: run-netns-cni\x2d3855648b\x2d4bcb\x2dd8b4\x2d0a6d\x2da8fd9e36b757.mount: Deactivated successfully. Sep 13 00:02:28.567246 containerd[1714]: time="2025-09-13T00:02:28.565275311Z" level=info msg="TearDown network for sandbox \"807ca00e6df247995bf8fc66c092154982bf3c3d5167b2143ead362087ddff60\" successfully" Sep 13 00:02:28.567246 containerd[1714]: time="2025-09-13T00:02:28.566971468Z" level=info msg="StopPodSandbox for \"807ca00e6df247995bf8fc66c092154982bf3c3d5167b2143ead362087ddff60\" returns successfully" Sep 13 00:02:28.567246 containerd[1714]: time="2025-09-13T00:02:28.567139548Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:02:28.567246 containerd[1714]: time="2025-09-13T00:02:28.567193548Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:02:28.573458 containerd[1714]: time="2025-09-13T00:02:28.572928018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dd67bc454-q6cd7,Uid:7951ccfd-1c72-4867-a215-dc5163f06c2d,Namespace:calico-apiserver,Attempt:1,}" Sep 13 00:02:28.573458 containerd[1714]: time="2025-09-13T00:02:28.567205268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:02:28.573458 containerd[1714]: time="2025-09-13T00:02:28.567280827Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:02:28.613275 systemd[1]: Started cri-containerd-358f1899d134e484402438702720092ec59e5e915182df9bc1c74dd23c778e46.scope - libcontainer container 358f1899d134e484402438702720092ec59e5e915182df9bc1c74dd23c778e46. Sep 13 00:02:28.651954 kubelet[3139]: I0913 00:02:28.650778 3139 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:02:28.651954 kubelet[3139]: I0913 00:02:28.650953 3139 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:02:28.657792 containerd[1714]: time="2025-09-13T00:02:28.657301758Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:02:28.657792 containerd[1714]: time="2025-09-13T00:02:28.657367078Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:02:28.657792 containerd[1714]: time="2025-09-13T00:02:28.657378038Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:02:28.657792 containerd[1714]: time="2025-09-13T00:02:28.657460358Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:02:28.713249 systemd[1]: Started cri-containerd-ae22384017e810641a7002d521cf77eb5399f5435979b3f4c4da77431815bcd8.scope - libcontainer container ae22384017e810641a7002d521cf77eb5399f5435979b3f4c4da77431815bcd8. 
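Each successful sandbox start in this window is preceded by the same CNI ADD choreography: find or create the WorkloadEndpoint (plugin.go 340), run the IPAM walk, populate the endpoint with the claimed /32 (k8s.go 418), pick the host-side cali* veth name and disable IPv4 forwarding on it (dataplane_linux.go 69, 508), attach the MAC and active container ID (k8s.go 446), then write the endpoint back to the datastore (k8s.go 532). A schematic of that ordering with hypothetical stub helpers — a sketch of the sequence as logged, not the plugin's real code:

```go
package main

import "fmt"

// endpoint is a pared-down stand-in for v3.WorkloadEndpoint with only
// the fields this sketch touches; not Calico's real type.
type endpoint struct {
	Pod           string
	IPNetworks    []string
	InterfaceName string
	MAC           string
	ContainerID   string
}

// Hypothetical stubs for the plugin internals; each comment maps the
// step to the corresponding log line.
func ipamAutoAssign(handle string) (string, error) { return "192.168.38.136", nil }
func vethNameFor(pod string) string                { return "cali498eccb5e3c" }
func disableIPv4Forwarding(iface string)           {} // dataplane_linux.go 508
func containerMAC(id string) string                { return "fa:67:f0:5f:13:35" }
func writeEndpoint(ep *endpoint) error             { return nil } // k8s.go 532

func cmdAddK8s(containerID, pod string) (*endpoint, error) {
	ep := &endpoint{Pod: pod} // plugin.go 340: found existing endpoint

	ip, err := ipamAutoAssign("k8s-pod-network." + containerID) // the IPAM walk
	if err != nil {
		return nil, err
	}
	ep.IPNetworks = []string{ip + "/32"} // k8s.go 418: Populated endpoint

	ep.InterfaceName = vethNameFor(pod) // dataplane_linux.go 69
	disableIPv4Forwarding(ep.InterfaceName)

	ep.MAC = containerMAC(containerID) // k8s.go 446: Added Mac, interface name, container ID
	ep.ContainerID = containerID
	return ep, writeEndpoint(ep) // k8s.go 532: Wrote updated endpoint to datastore
}

func main() {
	ep, _ := cmdAddK8s("ae22384017e8", "coredns-7c65d6cfc9-wm5tv")
	fmt.Printf("%+v\n", ep)
}
```

Only after the datastore write does systemd-networkd report the cali* link UP / "Gained carrier", and containerd start the cri-containerd-&lt;id&gt;.scope unit.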
Sep 13 00:02:28.825579 containerd[1714]: time="2025-09-13T00:02:28.825277360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-wm5tv,Uid:0aaf6e3b-2112-4be4-8bd2-d8200dc2d876,Namespace:kube-system,Attempt:1,} returns sandbox id \"ae22384017e810641a7002d521cf77eb5399f5435979b3f4c4da77431815bcd8\"" Sep 13 00:02:28.830631 containerd[1714]: time="2025-09-13T00:02:28.830369631Z" level=info msg="CreateContainer within sandbox \"ae22384017e810641a7002d521cf77eb5399f5435979b3f4c4da77431815bcd8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:02:28.897272 containerd[1714]: time="2025-09-13T00:02:28.897143240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8447f55f8b-9w8q9,Uid:ab136b13-4df9-4d75-8b01-4675fa58dba1,Namespace:calico-system,Attempt:1,} returns sandbox id \"358f1899d134e484402438702720092ec59e5e915182df9bc1c74dd23c778e46\"" Sep 13 00:02:28.905235 systemd-networkd[1567]: calibb28e7e77a6: Gained IPv6LL Sep 13 00:02:29.008218 systemd-networkd[1567]: calia19cf7a4b77: Link UP Sep 13 00:02:29.008427 systemd-networkd[1567]: calia19cf7a4b77: Gained carrier Sep 13 00:02:29.033117 containerd[1714]: 2025-09-13 00:02:28.833 [INFO][5642] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--5dd67bc454--q6cd7-eth0 calico-apiserver-5dd67bc454- calico-apiserver 7951ccfd-1c72-4867-a215-dc5163f06c2d 1032 0 2025-09-13 00:01:56 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5dd67bc454 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.5-n-a13ccab244 calico-apiserver-5dd67bc454-q6cd7 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia19cf7a4b77 [] [] }} ContainerID="69857c107a218561075b0242a55026b5f0b1c797abc2f21530d49275b03ee7ec" Namespace="calico-apiserver" Pod="calico-apiserver-5dd67bc454-q6cd7" WorkloadEndpoint="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--5dd67bc454--q6cd7-" Sep 13 00:02:29.033117 containerd[1714]: 2025-09-13 00:02:28.833 [INFO][5642] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="69857c107a218561075b0242a55026b5f0b1c797abc2f21530d49275b03ee7ec" Namespace="calico-apiserver" Pod="calico-apiserver-5dd67bc454-q6cd7" WorkloadEndpoint="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--5dd67bc454--q6cd7-eth0" Sep 13 00:02:29.033117 containerd[1714]: 2025-09-13 00:02:28.929 [INFO][5668] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="69857c107a218561075b0242a55026b5f0b1c797abc2f21530d49275b03ee7ec" HandleID="k8s-pod-network.69857c107a218561075b0242a55026b5f0b1c797abc2f21530d49275b03ee7ec" Workload="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--5dd67bc454--q6cd7-eth0" Sep 13 00:02:29.033117 containerd[1714]: 2025-09-13 00:02:28.930 [INFO][5668] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="69857c107a218561075b0242a55026b5f0b1c797abc2f21530d49275b03ee7ec" HandleID="k8s-pod-network.69857c107a218561075b0242a55026b5f0b1c797abc2f21530d49275b03ee7ec" Workload="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--5dd67bc454--q6cd7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3a80), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.5-n-a13ccab244", 
"pod":"calico-apiserver-5dd67bc454-q6cd7", "timestamp":"2025-09-13 00:02:28.929259227 +0000 UTC"}, Hostname:"ci-4081.3.5-n-a13ccab244", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:02:29.033117 containerd[1714]: 2025-09-13 00:02:28.931 [INFO][5668] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:02:29.033117 containerd[1714]: 2025-09-13 00:02:28.931 [INFO][5668] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:02:29.033117 containerd[1714]: 2025-09-13 00:02:28.931 [INFO][5668] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-n-a13ccab244' Sep 13 00:02:29.033117 containerd[1714]: 2025-09-13 00:02:28.941 [INFO][5668] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.69857c107a218561075b0242a55026b5f0b1c797abc2f21530d49275b03ee7ec" host="ci-4081.3.5-n-a13ccab244" Sep 13 00:02:29.033117 containerd[1714]: 2025-09-13 00:02:28.948 [INFO][5668] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-n-a13ccab244" Sep 13 00:02:29.033117 containerd[1714]: 2025-09-13 00:02:28.954 [INFO][5668] ipam/ipam.go 511: Trying affinity for 192.168.38.128/26 host="ci-4081.3.5-n-a13ccab244" Sep 13 00:02:29.033117 containerd[1714]: 2025-09-13 00:02:28.957 [INFO][5668] ipam/ipam.go 158: Attempting to load block cidr=192.168.38.128/26 host="ci-4081.3.5-n-a13ccab244" Sep 13 00:02:29.033117 containerd[1714]: 2025-09-13 00:02:28.961 [INFO][5668] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.38.128/26 host="ci-4081.3.5-n-a13ccab244" Sep 13 00:02:29.033117 containerd[1714]: 2025-09-13 00:02:28.961 [INFO][5668] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.38.128/26 handle="k8s-pod-network.69857c107a218561075b0242a55026b5f0b1c797abc2f21530d49275b03ee7ec" host="ci-4081.3.5-n-a13ccab244" Sep 13 00:02:29.033117 containerd[1714]: 2025-09-13 00:02:28.966 [INFO][5668] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.69857c107a218561075b0242a55026b5f0b1c797abc2f21530d49275b03ee7ec Sep 13 00:02:29.033117 containerd[1714]: 2025-09-13 00:02:28.971 [INFO][5668] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.38.128/26 handle="k8s-pod-network.69857c107a218561075b0242a55026b5f0b1c797abc2f21530d49275b03ee7ec" host="ci-4081.3.5-n-a13ccab244" Sep 13 00:02:29.033117 containerd[1714]: 2025-09-13 00:02:28.983 [INFO][5668] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.38.137/26] block=192.168.38.128/26 handle="k8s-pod-network.69857c107a218561075b0242a55026b5f0b1c797abc2f21530d49275b03ee7ec" host="ci-4081.3.5-n-a13ccab244" Sep 13 00:02:29.033117 containerd[1714]: 2025-09-13 00:02:28.983 [INFO][5668] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.38.137/26] handle="k8s-pod-network.69857c107a218561075b0242a55026b5f0b1c797abc2f21530d49275b03ee7ec" host="ci-4081.3.5-n-a13ccab244" Sep 13 00:02:29.033117 containerd[1714]: 2025-09-13 00:02:28.983 [INFO][5668] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:02:29.033117 containerd[1714]: 2025-09-13 00:02:28.983 [INFO][5668] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.38.137/26] IPv6=[] ContainerID="69857c107a218561075b0242a55026b5f0b1c797abc2f21530d49275b03ee7ec" HandleID="k8s-pod-network.69857c107a218561075b0242a55026b5f0b1c797abc2f21530d49275b03ee7ec" Workload="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--5dd67bc454--q6cd7-eth0" Sep 13 00:02:29.033646 containerd[1714]: 2025-09-13 00:02:28.987 [INFO][5642] cni-plugin/k8s.go 418: Populated endpoint ContainerID="69857c107a218561075b0242a55026b5f0b1c797abc2f21530d49275b03ee7ec" Namespace="calico-apiserver" Pod="calico-apiserver-5dd67bc454-q6cd7" WorkloadEndpoint="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--5dd67bc454--q6cd7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--5dd67bc454--q6cd7-eth0", GenerateName:"calico-apiserver-5dd67bc454-", Namespace:"calico-apiserver", SelfLink:"", UID:"7951ccfd-1c72-4867-a215-dc5163f06c2d", ResourceVersion:"1032", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 1, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dd67bc454", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-a13ccab244", ContainerID:"", Pod:"calico-apiserver-5dd67bc454-q6cd7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.38.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia19cf7a4b77", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:02:29.033646 containerd[1714]: 2025-09-13 00:02:28.987 [INFO][5642] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.38.137/32] ContainerID="69857c107a218561075b0242a55026b5f0b1c797abc2f21530d49275b03ee7ec" Namespace="calico-apiserver" Pod="calico-apiserver-5dd67bc454-q6cd7" WorkloadEndpoint="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--5dd67bc454--q6cd7-eth0" Sep 13 00:02:29.033646 containerd[1714]: 2025-09-13 00:02:28.987 [INFO][5642] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia19cf7a4b77 ContainerID="69857c107a218561075b0242a55026b5f0b1c797abc2f21530d49275b03ee7ec" Namespace="calico-apiserver" Pod="calico-apiserver-5dd67bc454-q6cd7" WorkloadEndpoint="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--5dd67bc454--q6cd7-eth0" Sep 13 00:02:29.033646 containerd[1714]: 2025-09-13 00:02:29.008 [INFO][5642] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="69857c107a218561075b0242a55026b5f0b1c797abc2f21530d49275b03ee7ec" Namespace="calico-apiserver" Pod="calico-apiserver-5dd67bc454-q6cd7" WorkloadEndpoint="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--5dd67bc454--q6cd7-eth0" Sep 13 00:02:29.033646 containerd[1714]: 2025-09-13 00:02:29.010 
[INFO][5642] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="69857c107a218561075b0242a55026b5f0b1c797abc2f21530d49275b03ee7ec" Namespace="calico-apiserver" Pod="calico-apiserver-5dd67bc454-q6cd7" WorkloadEndpoint="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--5dd67bc454--q6cd7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--5dd67bc454--q6cd7-eth0", GenerateName:"calico-apiserver-5dd67bc454-", Namespace:"calico-apiserver", SelfLink:"", UID:"7951ccfd-1c72-4867-a215-dc5163f06c2d", ResourceVersion:"1032", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 1, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dd67bc454", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-a13ccab244", ContainerID:"69857c107a218561075b0242a55026b5f0b1c797abc2f21530d49275b03ee7ec", Pod:"calico-apiserver-5dd67bc454-q6cd7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.38.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia19cf7a4b77", MAC:"7e:d2:76:91:13:95", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:02:29.033646 containerd[1714]: 2025-09-13 00:02:29.027 [INFO][5642] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="69857c107a218561075b0242a55026b5f0b1c797abc2f21530d49275b03ee7ec" Namespace="calico-apiserver" Pod="calico-apiserver-5dd67bc454-q6cd7" WorkloadEndpoint="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--5dd67bc454--q6cd7-eth0" Sep 13 00:02:29.283037 containerd[1714]: time="2025-09-13T00:02:29.280670964Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:02:29.283037 containerd[1714]: time="2025-09-13T00:02:29.280732284Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:02:29.283037 containerd[1714]: time="2025-09-13T00:02:29.280747524Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:02:29.283037 containerd[1714]: time="2025-09-13T00:02:29.280832764Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:02:29.288126 containerd[1714]: time="2025-09-13T00:02:29.287478033Z" level=info msg="CreateContainer within sandbox \"ae22384017e810641a7002d521cf77eb5399f5435979b3f4c4da77431815bcd8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"449affd929f851b7e1cf030897484c09be22c4e0804fb12a2f2cf84d87f73371\"" Sep 13 00:02:29.289116 containerd[1714]: time="2025-09-13T00:02:29.288779591Z" level=info msg="StartContainer for \"449affd929f851b7e1cf030897484c09be22c4e0804fb12a2f2cf84d87f73371\"" Sep 13 00:02:29.310055 systemd-networkd[1567]: vxlan.calico: Link UP Sep 13 00:02:29.310063 systemd-networkd[1567]: vxlan.calico: Gained carrier Sep 13 00:02:29.321190 systemd[1]: Started cri-containerd-69857c107a218561075b0242a55026b5f0b1c797abc2f21530d49275b03ee7ec.scope - libcontainer container 69857c107a218561075b0242a55026b5f0b1c797abc2f21530d49275b03ee7ec. Sep 13 00:02:29.377222 systemd[1]: Started cri-containerd-449affd929f851b7e1cf030897484c09be22c4e0804fb12a2f2cf84d87f73371.scope - libcontainer container 449affd929f851b7e1cf030897484c09be22c4e0804fb12a2f2cf84d87f73371. Sep 13 00:02:29.452061 containerd[1714]: time="2025-09-13T00:02:29.452003760Z" level=info msg="StartContainer for \"449affd929f851b7e1cf030897484c09be22c4e0804fb12a2f2cf84d87f73371\" returns successfully" Sep 13 00:02:29.605098 containerd[1714]: time="2025-09-13T00:02:29.604402547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dd67bc454-q6cd7,Uid:7951ccfd-1c72-4867-a215-dc5163f06c2d,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"69857c107a218561075b0242a55026b5f0b1c797abc2f21530d49275b03ee7ec\"" Sep 13 00:02:29.618057 containerd[1714]: time="2025-09-13T00:02:29.617258486Z" level=info msg="CreateContainer within sandbox \"69857c107a218561075b0242a55026b5f0b1c797abc2f21530d49275b03ee7ec\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 00:02:29.694746 kubelet[3139]: I0913 00:02:29.694382 3139 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-wm5tv" podStartSLOduration=43.694363238 podStartE2EDuration="43.694363238s" podCreationTimestamp="2025-09-13 00:01:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:02:29.692678001 +0000 UTC m=+50.498359264" watchObservedRunningTime="2025-09-13 00:02:29.694363238 +0000 UTC m=+50.500044541" Sep 13 00:02:29.728080 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount460186002.mount: Deactivated successfully. Sep 13 00:02:29.753040 containerd[1714]: time="2025-09-13T00:02:29.752496542Z" level=info msg="CreateContainer within sandbox \"69857c107a218561075b0242a55026b5f0b1c797abc2f21530d49275b03ee7ec\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5383e02acfc6f52796e7b15f86ef4ade2057956d834e2afddb43f2a018cb94a0\"" Sep 13 00:02:29.756457 containerd[1714]: time="2025-09-13T00:02:29.756127576Z" level=info msg="StartContainer for \"5383e02acfc6f52796e7b15f86ef4ade2057956d834e2afddb43f2a018cb94a0\"" Sep 13 00:02:29.824238 systemd[1]: Started cri-containerd-5383e02acfc6f52796e7b15f86ef4ade2057956d834e2afddb43f2a018cb94a0.scope - libcontainer container 5383e02acfc6f52796e7b15f86ef4ade2057956d834e2afddb43f2a018cb94a0. 
Sep 13 00:02:29.864227 systemd-networkd[1567]: calib66f97ec93d: Gained IPv6LL Sep 13 00:02:29.933035 containerd[1714]: time="2025-09-13T00:02:29.932871563Z" level=info msg="StartContainer for \"5383e02acfc6f52796e7b15f86ef4ade2057956d834e2afddb43f2a018cb94a0\" returns successfully" Sep 13 00:02:30.441310 systemd-networkd[1567]: cali498eccb5e3c: Gained IPv6LL Sep 13 00:02:30.565242 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2765724785.mount: Deactivated successfully. Sep 13 00:02:30.632194 systemd-networkd[1567]: calia19cf7a4b77: Gained IPv6LL Sep 13 00:02:30.719152 kubelet[3139]: I0913 00:02:30.718911 3139 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5dd67bc454-q6cd7" podStartSLOduration=34.718740379 podStartE2EDuration="34.718740379s" podCreationTimestamp="2025-09-13 00:01:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:02:30.718046221 +0000 UTC m=+51.523727524" watchObservedRunningTime="2025-09-13 00:02:30.718740379 +0000 UTC m=+51.524421642" Sep 13 00:02:30.824150 systemd-networkd[1567]: vxlan.calico: Gained IPv6LL Sep 13 00:02:31.148729 containerd[1714]: time="2025-09-13T00:02:31.147841948Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:02:31.153076 containerd[1714]: time="2025-09-13T00:02:31.153038459Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=61845332" Sep 13 00:02:31.157358 containerd[1714]: time="2025-09-13T00:02:31.157324172Z" level=info msg="ImageCreate event name:\"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:02:31.164174 containerd[1714]: time="2025-09-13T00:02:31.164122041Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:02:31.165576 containerd[1714]: time="2025-09-13T00:02:31.164931279Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"61845178\" in 4.472468022s" Sep 13 00:02:31.165576 containerd[1714]: time="2025-09-13T00:02:31.164966279Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\"" Sep 13 00:02:31.168343 containerd[1714]: time="2025-09-13T00:02:31.168311634Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 13 00:02:31.169698 containerd[1714]: time="2025-09-13T00:02:31.169657192Z" level=info msg="CreateContainer within sandbox \"45c1e6b471ae2216ed87136b7531057f2a88105a16f2764e27ae24711a07975a\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 13 00:02:31.208741 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3498131525.mount: Deactivated successfully. 
Sep 13 00:02:31.224885 containerd[1714]: time="2025-09-13T00:02:31.224816220Z" level=info msg="CreateContainer within sandbox \"45c1e6b471ae2216ed87136b7531057f2a88105a16f2764e27ae24711a07975a\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"33bce7e2b7036929ffd5e28e5eb9932cbf460c777fb0ccea717f039b64271a8e\"" Sep 13 00:02:31.226210 containerd[1714]: time="2025-09-13T00:02:31.225739619Z" level=info msg="StartContainer for \"33bce7e2b7036929ffd5e28e5eb9932cbf460c777fb0ccea717f039b64271a8e\"" Sep 13 00:02:31.304741 systemd[1]: Started cri-containerd-33bce7e2b7036929ffd5e28e5eb9932cbf460c777fb0ccea717f039b64271a8e.scope - libcontainer container 33bce7e2b7036929ffd5e28e5eb9932cbf460c777fb0ccea717f039b64271a8e. Sep 13 00:02:31.390881 containerd[1714]: time="2025-09-13T00:02:31.390762225Z" level=info msg="StartContainer for \"33bce7e2b7036929ffd5e28e5eb9932cbf460c777fb0ccea717f039b64271a8e\" returns successfully" Sep 13 00:02:31.705455 kubelet[3139]: I0913 00:02:31.705294 3139 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:02:31.720914 kubelet[3139]: I0913 00:02:31.720855 3139 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-9gfvn" podStartSLOduration=24.431735467 podStartE2EDuration="30.720837597s" podCreationTimestamp="2025-09-13 00:02:01 +0000 UTC" firstStartedPulling="2025-09-13 00:02:24.877726306 +0000 UTC m=+45.683407569" lastFinishedPulling="2025-09-13 00:02:31.166828396 +0000 UTC m=+51.972509699" observedRunningTime="2025-09-13 00:02:31.720791038 +0000 UTC m=+52.526472341" watchObservedRunningTime="2025-09-13 00:02:31.720837597 +0000 UTC m=+52.526518900" Sep 13 00:02:32.509654 containerd[1714]: time="2025-09-13T00:02:32.508940050Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:02:32.512952 containerd[1714]: time="2025-09-13T00:02:32.512924924Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8227489" Sep 13 00:02:32.531969 containerd[1714]: time="2025-09-13T00:02:32.531926572Z" level=info msg="ImageCreate event name:\"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:02:32.538692 containerd[1714]: time="2025-09-13T00:02:32.538631881Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:02:32.540272 containerd[1714]: time="2025-09-13T00:02:32.540153959Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"9596730\" in 1.371808405s" Sep 13 00:02:32.540272 containerd[1714]: time="2025-09-13T00:02:32.540185079Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\"" Sep 13 00:02:32.541768 containerd[1714]: time="2025-09-13T00:02:32.541743396Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 13 00:02:32.543440 containerd[1714]: 
time="2025-09-13T00:02:32.543408753Z" level=info msg="CreateContainer within sandbox \"0b0a843ae894241a6b7cbd8a2aa904f8210396c3a3cc5b725ab6204e4381f0c6\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 13 00:02:32.583381 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2272770826.mount: Deactivated successfully. Sep 13 00:02:32.593976 systemd[1]: run-containerd-runc-k8s.io-33bce7e2b7036929ffd5e28e5eb9932cbf460c777fb0ccea717f039b64271a8e-runc.YYECrH.mount: Deactivated successfully. Sep 13 00:02:32.608590 containerd[1714]: time="2025-09-13T00:02:32.608456726Z" level=info msg="CreateContainer within sandbox \"0b0a843ae894241a6b7cbd8a2aa904f8210396c3a3cc5b725ab6204e4381f0c6\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"cc49d3aff352290df82b8c3be60c022e318eec925e333cc66ef72096798cecde\"" Sep 13 00:02:32.612085 containerd[1714]: time="2025-09-13T00:02:32.610590802Z" level=info msg="StartContainer for \"cc49d3aff352290df82b8c3be60c022e318eec925e333cc66ef72096798cecde\"" Sep 13 00:02:32.656226 systemd[1]: Started cri-containerd-cc49d3aff352290df82b8c3be60c022e318eec925e333cc66ef72096798cecde.scope - libcontainer container cc49d3aff352290df82b8c3be60c022e318eec925e333cc66ef72096798cecde. Sep 13 00:02:32.692573 containerd[1714]: time="2025-09-13T00:02:32.692534713Z" level=info msg="StartContainer for \"cc49d3aff352290df82b8c3be60c022e318eec925e333cc66ef72096798cecde\" returns successfully" Sep 13 00:02:35.902765 kubelet[3139]: I0913 00:02:35.902724 3139 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:02:36.053647 kubelet[3139]: I0913 00:02:36.052470 3139 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:02:36.055064 containerd[1714]: time="2025-09-13T00:02:36.055007077Z" level=info msg="StopContainer for \"5383e02acfc6f52796e7b15f86ef4ade2057956d834e2afddb43f2a018cb94a0\" with timeout 30 (s)" Sep 13 00:02:36.056697 containerd[1714]: time="2025-09-13T00:02:36.056663434Z" level=info msg="Stop container \"5383e02acfc6f52796e7b15f86ef4ade2057956d834e2afddb43f2a018cb94a0\" with signal terminated" Sep 13 00:02:36.104198 systemd[1]: cri-containerd-5383e02acfc6f52796e7b15f86ef4ade2057956d834e2afddb43f2a018cb94a0.scope: Deactivated successfully. Sep 13 00:02:36.112316 systemd[1]: Created slice kubepods-besteffort-pod3542bd7c_821a_4fb7_8e5e_15baa845ed6b.slice - libcontainer container kubepods-besteffort-pod3542bd7c_821a_4fb7_8e5e_15baa845ed6b.slice. Sep 13 00:02:36.158097 containerd[1714]: time="2025-09-13T00:02:36.157585795Z" level=info msg="shim disconnected" id=5383e02acfc6f52796e7b15f86ef4ade2057956d834e2afddb43f2a018cb94a0 namespace=k8s.io Sep 13 00:02:36.158097 containerd[1714]: time="2025-09-13T00:02:36.157979234Z" level=warning msg="cleaning up after shim disconnected" id=5383e02acfc6f52796e7b15f86ef4ade2057956d834e2afddb43f2a018cb94a0 namespace=k8s.io Sep 13 00:02:36.158097 containerd[1714]: time="2025-09-13T00:02:36.157990714Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:02:36.158844 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5383e02acfc6f52796e7b15f86ef4ade2057956d834e2afddb43f2a018cb94a0-rootfs.mount: Deactivated successfully. 
Sep 13 00:02:36.284903 kubelet[3139]: I0913 00:02:36.284854 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkd6f\" (UniqueName: \"kubernetes.io/projected/3542bd7c-821a-4fb7-8e5e-15baa845ed6b-kube-api-access-qkd6f\") pod \"calico-apiserver-77b8b896d4-xhwnz\" (UID: \"3542bd7c-821a-4fb7-8e5e-15baa845ed6b\") " pod="calico-apiserver/calico-apiserver-77b8b896d4-xhwnz" Sep 13 00:02:36.285057 kubelet[3139]: I0913 00:02:36.284941 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3542bd7c-821a-4fb7-8e5e-15baa845ed6b-calico-apiserver-certs\") pod \"calico-apiserver-77b8b896d4-xhwnz\" (UID: \"3542bd7c-821a-4fb7-8e5e-15baa845ed6b\") " pod="calico-apiserver/calico-apiserver-77b8b896d4-xhwnz" Sep 13 00:02:37.036405 containerd[1714]: time="2025-09-13T00:02:37.036012526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77b8b896d4-xhwnz,Uid:3542bd7c-821a-4fb7-8e5e-15baa845ed6b,Namespace:calico-apiserver,Attempt:0,}" Sep 13 00:02:37.216594 containerd[1714]: time="2025-09-13T00:02:37.216551601Z" level=info msg="StopContainer for \"5383e02acfc6f52796e7b15f86ef4ade2057956d834e2afddb43f2a018cb94a0\" returns successfully" Sep 13 00:02:37.218657 containerd[1714]: time="2025-09-13T00:02:37.217399839Z" level=info msg="StopPodSandbox for \"69857c107a218561075b0242a55026b5f0b1c797abc2f21530d49275b03ee7ec\"" Sep 13 00:02:37.218657 containerd[1714]: time="2025-09-13T00:02:37.217439479Z" level=info msg="Container to stop \"5383e02acfc6f52796e7b15f86ef4ade2057956d834e2afddb43f2a018cb94a0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:02:37.221234 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-69857c107a218561075b0242a55026b5f0b1c797abc2f21530d49275b03ee7ec-shm.mount: Deactivated successfully. Sep 13 00:02:37.226805 systemd[1]: cri-containerd-69857c107a218561075b0242a55026b5f0b1c797abc2f21530d49275b03ee7ec.scope: Deactivated successfully. Sep 13 00:02:37.231158 containerd[1714]: time="2025-09-13T00:02:37.231108938Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:02:37.236185 containerd[1714]: time="2025-09-13T00:02:37.235733570Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=48134957" Sep 13 00:02:37.248811 containerd[1714]: time="2025-09-13T00:02:37.247779511Z" level=info msg="ImageCreate event name:\"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:02:37.255155 containerd[1714]: time="2025-09-13T00:02:37.255111620Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:02:37.255356 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-69857c107a218561075b0242a55026b5f0b1c797abc2f21530d49275b03ee7ec-rootfs.mount: Deactivated successfully. 
Sep 13 00:02:37.255904 containerd[1714]: time="2025-09-13T00:02:37.255864459Z" level=info msg="shim disconnected" id=69857c107a218561075b0242a55026b5f0b1c797abc2f21530d49275b03ee7ec namespace=k8s.io Sep 13 00:02:37.256144 containerd[1714]: time="2025-09-13T00:02:37.256086138Z" level=warning msg="cleaning up after shim disconnected" id=69857c107a218561075b0242a55026b5f0b1c797abc2f21530d49275b03ee7ec namespace=k8s.io Sep 13 00:02:37.256144 containerd[1714]: time="2025-09-13T00:02:37.256138098Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:02:37.257087 containerd[1714]: time="2025-09-13T00:02:37.256924537Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"49504166\" in 4.714955661s" Sep 13 00:02:37.257087 containerd[1714]: time="2025-09-13T00:02:37.256964257Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\"" Sep 13 00:02:37.261280 containerd[1714]: time="2025-09-13T00:02:37.261231410Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 13 00:02:37.278734 containerd[1714]: time="2025-09-13T00:02:37.278691183Z" level=info msg="CreateContainer within sandbox \"358f1899d134e484402438702720092ec59e5e915182df9bc1c74dd23c778e46\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 13 00:02:37.342923 containerd[1714]: time="2025-09-13T00:02:37.342796281Z" level=info msg="CreateContainer within sandbox \"358f1899d134e484402438702720092ec59e5e915182df9bc1c74dd23c778e46\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"84235533684eb779f6697a4afbcd60573d8a4069a3680cff0f3eef7cbcc5b724\"" Sep 13 00:02:37.346343 containerd[1714]: time="2025-09-13T00:02:37.346144876Z" level=info msg="StartContainer for \"84235533684eb779f6697a4afbcd60573d8a4069a3680cff0f3eef7cbcc5b724\"" Sep 13 00:02:37.394653 systemd[1]: Started cri-containerd-84235533684eb779f6697a4afbcd60573d8a4069a3680cff0f3eef7cbcc5b724.scope - libcontainer container 84235533684eb779f6697a4afbcd60573d8a4069a3680cff0f3eef7cbcc5b724. 
Sep 13 00:02:37.401651 systemd-networkd[1567]: calia19cf7a4b77: Link DOWN Sep 13 00:02:37.401666 systemd-networkd[1567]: calia19cf7a4b77: Lost carrier Sep 13 00:02:37.490244 systemd-networkd[1567]: cali3ad0cc1c94c: Link UP Sep 13 00:02:37.491755 systemd-networkd[1567]: cali3ad0cc1c94c: Gained carrier Sep 13 00:02:37.500889 containerd[1714]: time="2025-09-13T00:02:37.500844631Z" level=info msg="StartContainer for \"84235533684eb779f6697a4afbcd60573d8a4069a3680cff0f3eef7cbcc5b724\" returns successfully" Sep 13 00:02:37.517231 containerd[1714]: 2025-09-13 00:02:37.353 [INFO][6128] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--77b8b896d4--xhwnz-eth0 calico-apiserver-77b8b896d4- calico-apiserver 3542bd7c-821a-4fb7-8e5e-15baa845ed6b 1116 0 2025-09-13 00:02:36 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:77b8b896d4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.5-n-a13ccab244 calico-apiserver-77b8b896d4-xhwnz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3ad0cc1c94c [] [] }} ContainerID="18482b37f9470ebf6d721406e45d5a3694eb57b27368b62de6b3e15585effcac" Namespace="calico-apiserver" Pod="calico-apiserver-77b8b896d4-xhwnz" WorkloadEndpoint="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--77b8b896d4--xhwnz-" Sep 13 00:02:37.517231 containerd[1714]: 2025-09-13 00:02:37.353 [INFO][6128] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="18482b37f9470ebf6d721406e45d5a3694eb57b27368b62de6b3e15585effcac" Namespace="calico-apiserver" Pod="calico-apiserver-77b8b896d4-xhwnz" WorkloadEndpoint="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--77b8b896d4--xhwnz-eth0" Sep 13 00:02:37.517231 containerd[1714]: 2025-09-13 00:02:37.414 [INFO][6162] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="18482b37f9470ebf6d721406e45d5a3694eb57b27368b62de6b3e15585effcac" HandleID="k8s-pod-network.18482b37f9470ebf6d721406e45d5a3694eb57b27368b62de6b3e15585effcac" Workload="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--77b8b896d4--xhwnz-eth0" Sep 13 00:02:37.517231 containerd[1714]: 2025-09-13 00:02:37.415 [INFO][6162] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="18482b37f9470ebf6d721406e45d5a3694eb57b27368b62de6b3e15585effcac" HandleID="k8s-pod-network.18482b37f9470ebf6d721406e45d5a3694eb57b27368b62de6b3e15585effcac" Workload="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--77b8b896d4--xhwnz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3600), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.5-n-a13ccab244", "pod":"calico-apiserver-77b8b896d4-xhwnz", "timestamp":"2025-09-13 00:02:37.414506088 +0000 UTC"}, Hostname:"ci-4081.3.5-n-a13ccab244", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:02:37.517231 containerd[1714]: 2025-09-13 00:02:37.415 [INFO][6162] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:02:37.517231 containerd[1714]: 2025-09-13 00:02:37.416 [INFO][6162] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:02:37.517231 containerd[1714]: 2025-09-13 00:02:37.417 [INFO][6162] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-n-a13ccab244' Sep 13 00:02:37.517231 containerd[1714]: 2025-09-13 00:02:37.433 [INFO][6162] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.18482b37f9470ebf6d721406e45d5a3694eb57b27368b62de6b3e15585effcac" host="ci-4081.3.5-n-a13ccab244" Sep 13 00:02:37.517231 containerd[1714]: 2025-09-13 00:02:37.442 [INFO][6162] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-n-a13ccab244" Sep 13 00:02:37.517231 containerd[1714]: 2025-09-13 00:02:37.448 [INFO][6162] ipam/ipam.go 511: Trying affinity for 192.168.38.128/26 host="ci-4081.3.5-n-a13ccab244" Sep 13 00:02:37.517231 containerd[1714]: 2025-09-13 00:02:37.451 [INFO][6162] ipam/ipam.go 158: Attempting to load block cidr=192.168.38.128/26 host="ci-4081.3.5-n-a13ccab244" Sep 13 00:02:37.517231 containerd[1714]: 2025-09-13 00:02:37.454 [INFO][6162] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.38.128/26 host="ci-4081.3.5-n-a13ccab244" Sep 13 00:02:37.517231 containerd[1714]: 2025-09-13 00:02:37.454 [INFO][6162] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.38.128/26 handle="k8s-pod-network.18482b37f9470ebf6d721406e45d5a3694eb57b27368b62de6b3e15585effcac" host="ci-4081.3.5-n-a13ccab244" Sep 13 00:02:37.517231 containerd[1714]: 2025-09-13 00:02:37.457 [INFO][6162] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.18482b37f9470ebf6d721406e45d5a3694eb57b27368b62de6b3e15585effcac Sep 13 00:02:37.517231 containerd[1714]: 2025-09-13 00:02:37.463 [INFO][6162] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.38.128/26 handle="k8s-pod-network.18482b37f9470ebf6d721406e45d5a3694eb57b27368b62de6b3e15585effcac" host="ci-4081.3.5-n-a13ccab244" Sep 13 00:02:37.517231 containerd[1714]: 2025-09-13 00:02:37.477 [INFO][6162] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.38.138/26] block=192.168.38.128/26 handle="k8s-pod-network.18482b37f9470ebf6d721406e45d5a3694eb57b27368b62de6b3e15585effcac" host="ci-4081.3.5-n-a13ccab244" Sep 13 00:02:37.517231 containerd[1714]: 2025-09-13 00:02:37.477 [INFO][6162] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.38.138/26] handle="k8s-pod-network.18482b37f9470ebf6d721406e45d5a3694eb57b27368b62de6b3e15585effcac" host="ci-4081.3.5-n-a13ccab244" Sep 13 00:02:37.517231 containerd[1714]: 2025-09-13 00:02:37.478 [INFO][6162] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:02:37.517231 containerd[1714]: 2025-09-13 00:02:37.478 [INFO][6162] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.38.138/26] IPv6=[] ContainerID="18482b37f9470ebf6d721406e45d5a3694eb57b27368b62de6b3e15585effcac" HandleID="k8s-pod-network.18482b37f9470ebf6d721406e45d5a3694eb57b27368b62de6b3e15585effcac" Workload="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--77b8b896d4--xhwnz-eth0" Sep 13 00:02:37.517783 containerd[1714]: 2025-09-13 00:02:37.481 [INFO][6128] cni-plugin/k8s.go 418: Populated endpoint ContainerID="18482b37f9470ebf6d721406e45d5a3694eb57b27368b62de6b3e15585effcac" Namespace="calico-apiserver" Pod="calico-apiserver-77b8b896d4-xhwnz" WorkloadEndpoint="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--77b8b896d4--xhwnz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--77b8b896d4--xhwnz-eth0", GenerateName:"calico-apiserver-77b8b896d4-", Namespace:"calico-apiserver", SelfLink:"", UID:"3542bd7c-821a-4fb7-8e5e-15baa845ed6b", ResourceVersion:"1116", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 2, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77b8b896d4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-a13ccab244", ContainerID:"", Pod:"calico-apiserver-77b8b896d4-xhwnz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.38.138/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3ad0cc1c94c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:02:37.517783 containerd[1714]: 2025-09-13 00:02:37.481 [INFO][6128] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.38.138/32] ContainerID="18482b37f9470ebf6d721406e45d5a3694eb57b27368b62de6b3e15585effcac" Namespace="calico-apiserver" Pod="calico-apiserver-77b8b896d4-xhwnz" WorkloadEndpoint="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--77b8b896d4--xhwnz-eth0" Sep 13 00:02:37.517783 containerd[1714]: 2025-09-13 00:02:37.481 [INFO][6128] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3ad0cc1c94c ContainerID="18482b37f9470ebf6d721406e45d5a3694eb57b27368b62de6b3e15585effcac" Namespace="calico-apiserver" Pod="calico-apiserver-77b8b896d4-xhwnz" WorkloadEndpoint="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--77b8b896d4--xhwnz-eth0" Sep 13 00:02:37.517783 containerd[1714]: 2025-09-13 00:02:37.490 [INFO][6128] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="18482b37f9470ebf6d721406e45d5a3694eb57b27368b62de6b3e15585effcac" Namespace="calico-apiserver" Pod="calico-apiserver-77b8b896d4-xhwnz" WorkloadEndpoint="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--77b8b896d4--xhwnz-eth0" Sep 13 00:02:37.517783 containerd[1714]: 2025-09-13 00:02:37.491 
[INFO][6128] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="18482b37f9470ebf6d721406e45d5a3694eb57b27368b62de6b3e15585effcac" Namespace="calico-apiserver" Pod="calico-apiserver-77b8b896d4-xhwnz" WorkloadEndpoint="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--77b8b896d4--xhwnz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--77b8b896d4--xhwnz-eth0", GenerateName:"calico-apiserver-77b8b896d4-", Namespace:"calico-apiserver", SelfLink:"", UID:"3542bd7c-821a-4fb7-8e5e-15baa845ed6b", ResourceVersion:"1116", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 2, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77b8b896d4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-a13ccab244", ContainerID:"18482b37f9470ebf6d721406e45d5a3694eb57b27368b62de6b3e15585effcac", Pod:"calico-apiserver-77b8b896d4-xhwnz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.38.138/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3ad0cc1c94c", MAC:"e2:34:95:8a:2b:32", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:02:37.517783 containerd[1714]: 2025-09-13 00:02:37.510 [INFO][6128] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="18482b37f9470ebf6d721406e45d5a3694eb57b27368b62de6b3e15585effcac" Namespace="calico-apiserver" Pod="calico-apiserver-77b8b896d4-xhwnz" WorkloadEndpoint="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--77b8b896d4--xhwnz-eth0" Sep 13 00:02:37.567154 containerd[1714]: 2025-09-13 00:02:37.399 [INFO][6146] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="69857c107a218561075b0242a55026b5f0b1c797abc2f21530d49275b03ee7ec" Sep 13 00:02:37.567154 containerd[1714]: 2025-09-13 00:02:37.399 [INFO][6146] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="69857c107a218561075b0242a55026b5f0b1c797abc2f21530d49275b03ee7ec" iface="eth0" netns="/var/run/netns/cni-548fee37-0418-ae76-ed15-f52e3ae0903c" Sep 13 00:02:37.567154 containerd[1714]: 2025-09-13 00:02:37.400 [INFO][6146] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="69857c107a218561075b0242a55026b5f0b1c797abc2f21530d49275b03ee7ec" iface="eth0" netns="/var/run/netns/cni-548fee37-0418-ae76-ed15-f52e3ae0903c" Sep 13 00:02:37.567154 containerd[1714]: 2025-09-13 00:02:37.415 [INFO][6146] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="69857c107a218561075b0242a55026b5f0b1c797abc2f21530d49275b03ee7ec" after=16.126254ms iface="eth0" netns="/var/run/netns/cni-548fee37-0418-ae76-ed15-f52e3ae0903c" Sep 13 00:02:37.567154 containerd[1714]: 2025-09-13 00:02:37.415 [INFO][6146] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="69857c107a218561075b0242a55026b5f0b1c797abc2f21530d49275b03ee7ec" Sep 13 00:02:37.567154 containerd[1714]: 2025-09-13 00:02:37.415 [INFO][6146] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="69857c107a218561075b0242a55026b5f0b1c797abc2f21530d49275b03ee7ec" Sep 13 00:02:37.567154 containerd[1714]: 2025-09-13 00:02:37.459 [INFO][6192] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="69857c107a218561075b0242a55026b5f0b1c797abc2f21530d49275b03ee7ec" HandleID="k8s-pod-network.69857c107a218561075b0242a55026b5f0b1c797abc2f21530d49275b03ee7ec" Workload="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--5dd67bc454--q6cd7-eth0" Sep 13 00:02:37.567154 containerd[1714]: 2025-09-13 00:02:37.459 [INFO][6192] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:02:37.567154 containerd[1714]: 2025-09-13 00:02:37.478 [INFO][6192] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:02:37.567154 containerd[1714]: 2025-09-13 00:02:37.556 [INFO][6192] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="69857c107a218561075b0242a55026b5f0b1c797abc2f21530d49275b03ee7ec" HandleID="k8s-pod-network.69857c107a218561075b0242a55026b5f0b1c797abc2f21530d49275b03ee7ec" Workload="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--5dd67bc454--q6cd7-eth0" Sep 13 00:02:37.567154 containerd[1714]: 2025-09-13 00:02:37.556 [INFO][6192] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="69857c107a218561075b0242a55026b5f0b1c797abc2f21530d49275b03ee7ec" HandleID="k8s-pod-network.69857c107a218561075b0242a55026b5f0b1c797abc2f21530d49275b03ee7ec" Workload="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--5dd67bc454--q6cd7-eth0" Sep 13 00:02:37.567154 containerd[1714]: 2025-09-13 00:02:37.558 [INFO][6192] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:02:37.567154 containerd[1714]: 2025-09-13 00:02:37.562 [INFO][6146] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="69857c107a218561075b0242a55026b5f0b1c797abc2f21530d49275b03ee7ec" Sep 13 00:02:37.570382 containerd[1714]: time="2025-09-13T00:02:37.569945722Z" level=info msg="TearDown network for sandbox \"69857c107a218561075b0242a55026b5f0b1c797abc2f21530d49275b03ee7ec\" successfully" Sep 13 00:02:37.570382 containerd[1714]: time="2025-09-13T00:02:37.569982522Z" level=info msg="StopPodSandbox for \"69857c107a218561075b0242a55026b5f0b1c797abc2f21530d49275b03ee7ec\" returns successfully" Sep 13 00:02:37.570709 containerd[1714]: time="2025-09-13T00:02:37.570647801Z" level=info msg="StopPodSandbox for \"807ca00e6df247995bf8fc66c092154982bf3c3d5167b2143ead362087ddff60\"" Sep 13 00:02:37.577854 containerd[1714]: time="2025-09-13T00:02:37.577472670Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:02:37.577854 containerd[1714]: time="2025-09-13T00:02:37.577705230Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:02:37.577854 containerd[1714]: time="2025-09-13T00:02:37.577716390Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:02:37.578352 containerd[1714]: time="2025-09-13T00:02:37.578232549Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:02:37.619751 systemd[1]: Started cri-containerd-18482b37f9470ebf6d721406e45d5a3694eb57b27368b62de6b3e15585effcac.scope - libcontainer container 18482b37f9470ebf6d721406e45d5a3694eb57b27368b62de6b3e15585effcac. Sep 13 00:02:37.704664 containerd[1714]: 2025-09-13 00:02:37.652 [WARNING][6259] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="807ca00e6df247995bf8fc66c092154982bf3c3d5167b2143ead362087ddff60" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--5dd67bc454--q6cd7-eth0", GenerateName:"calico-apiserver-5dd67bc454-", Namespace:"calico-apiserver", SelfLink:"", UID:"7951ccfd-1c72-4867-a215-dc5163f06c2d", ResourceVersion:"1124", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 1, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dd67bc454", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-a13ccab244", ContainerID:"69857c107a218561075b0242a55026b5f0b1c797abc2f21530d49275b03ee7ec", Pod:"calico-apiserver-5dd67bc454-q6cd7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia19cf7a4b77", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:02:37.704664 containerd[1714]: 2025-09-13 00:02:37.653 [INFO][6259] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="807ca00e6df247995bf8fc66c092154982bf3c3d5167b2143ead362087ddff60" Sep 13 00:02:37.704664 containerd[1714]: 2025-09-13 00:02:37.653 [INFO][6259] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="807ca00e6df247995bf8fc66c092154982bf3c3d5167b2143ead362087ddff60" iface="eth0" netns="" Sep 13 00:02:37.704664 containerd[1714]: 2025-09-13 00:02:37.653 [INFO][6259] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="807ca00e6df247995bf8fc66c092154982bf3c3d5167b2143ead362087ddff60" Sep 13 00:02:37.704664 containerd[1714]: 2025-09-13 00:02:37.653 [INFO][6259] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="807ca00e6df247995bf8fc66c092154982bf3c3d5167b2143ead362087ddff60" Sep 13 00:02:37.704664 containerd[1714]: 2025-09-13 00:02:37.680 [INFO][6283] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="807ca00e6df247995bf8fc66c092154982bf3c3d5167b2143ead362087ddff60" HandleID="k8s-pod-network.807ca00e6df247995bf8fc66c092154982bf3c3d5167b2143ead362087ddff60" Workload="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--5dd67bc454--q6cd7-eth0" Sep 13 00:02:37.704664 containerd[1714]: 2025-09-13 00:02:37.680 [INFO][6283] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:02:37.704664 containerd[1714]: 2025-09-13 00:02:37.680 [INFO][6283] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:02:37.704664 containerd[1714]: 2025-09-13 00:02:37.696 [WARNING][6283] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="807ca00e6df247995bf8fc66c092154982bf3c3d5167b2143ead362087ddff60" HandleID="k8s-pod-network.807ca00e6df247995bf8fc66c092154982bf3c3d5167b2143ead362087ddff60" Workload="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--5dd67bc454--q6cd7-eth0" Sep 13 00:02:37.704664 containerd[1714]: 2025-09-13 00:02:37.696 [INFO][6283] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="807ca00e6df247995bf8fc66c092154982bf3c3d5167b2143ead362087ddff60" HandleID="k8s-pod-network.807ca00e6df247995bf8fc66c092154982bf3c3d5167b2143ead362087ddff60" Workload="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--5dd67bc454--q6cd7-eth0" Sep 13 00:02:37.704664 containerd[1714]: 2025-09-13 00:02:37.700 [INFO][6283] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:02:37.704664 containerd[1714]: 2025-09-13 00:02:37.703 [INFO][6259] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="807ca00e6df247995bf8fc66c092154982bf3c3d5167b2143ead362087ddff60" Sep 13 00:02:37.704664 containerd[1714]: time="2025-09-13T00:02:37.704535829Z" level=info msg="TearDown network for sandbox \"807ca00e6df247995bf8fc66c092154982bf3c3d5167b2143ead362087ddff60\" successfully" Sep 13 00:02:37.704664 containerd[1714]: time="2025-09-13T00:02:37.704558749Z" level=info msg="StopPodSandbox for \"807ca00e6df247995bf8fc66c092154982bf3c3d5167b2143ead362087ddff60\" returns successfully" Sep 13 00:02:37.708568 containerd[1714]: time="2025-09-13T00:02:37.708466703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77b8b896d4-xhwnz,Uid:3542bd7c-821a-4fb7-8e5e-15baa845ed6b,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"18482b37f9470ebf6d721406e45d5a3694eb57b27368b62de6b3e15585effcac\"" Sep 13 00:02:37.713253 containerd[1714]: time="2025-09-13T00:02:37.713133576Z" level=info msg="CreateContainer within sandbox \"18482b37f9470ebf6d721406e45d5a3694eb57b27368b62de6b3e15585effcac\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 00:02:37.733930 kubelet[3139]: I0913 00:02:37.733835 3139 scope.go:117] "RemoveContainer" containerID="5383e02acfc6f52796e7b15f86ef4ade2057956d834e2afddb43f2a018cb94a0" Sep 13 00:02:37.740160 containerd[1714]: time="2025-09-13T00:02:37.739946533Z" level=info msg="RemoveContainer for \"5383e02acfc6f52796e7b15f86ef4ade2057956d834e2afddb43f2a018cb94a0\"" Sep 13 00:02:37.759471 containerd[1714]: time="2025-09-13T00:02:37.759423903Z" level=info msg="RemoveContainer for \"5383e02acfc6f52796e7b15f86ef4ade2057956d834e2afddb43f2a018cb94a0\" returns successfully" Sep 13 00:02:37.760976 kubelet[3139]: I0913 00:02:37.760933 3139 scope.go:117] "RemoveContainer" containerID="5383e02acfc6f52796e7b15f86ef4ade2057956d834e2afddb43f2a018cb94a0" Sep 13 00:02:37.761314 containerd[1714]: time="2025-09-13T00:02:37.761265620Z" level=error msg="ContainerStatus for \"5383e02acfc6f52796e7b15f86ef4ade2057956d834e2afddb43f2a018cb94a0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5383e02acfc6f52796e7b15f86ef4ade2057956d834e2afddb43f2a018cb94a0\": not found" Sep 13 00:02:37.769117 kubelet[3139]: E0913 00:02:37.769065 3139 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5383e02acfc6f52796e7b15f86ef4ade2057956d834e2afddb43f2a018cb94a0\": not found" containerID="5383e02acfc6f52796e7b15f86ef4ade2057956d834e2afddb43f2a018cb94a0" Sep 13 00:02:37.770282 containerd[1714]: time="2025-09-13T00:02:37.770244125Z" level=info msg="CreateContainer within sandbox \"18482b37f9470ebf6d721406e45d5a3694eb57b27368b62de6b3e15585effcac\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2e076923838cedf5d1fbd5c0a1dac021d94edbe81d65e4a5684be3e80a6e9843\"" Sep 13 00:02:37.770657 kubelet[3139]: I0913 00:02:37.770619 3139 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5383e02acfc6f52796e7b15f86ef4ade2057956d834e2afddb43f2a018cb94a0"} err="failed to get container status \"5383e02acfc6f52796e7b15f86ef4ade2057956d834e2afddb43f2a018cb94a0\": rpc error: code = NotFound desc = an error occurred when try to find container \"5383e02acfc6f52796e7b15f86ef4ade2057956d834e2afddb43f2a018cb94a0\": not found" Sep 13 00:02:37.773443 containerd[1714]: time="2025-09-13T00:02:37.771255924Z" level=info msg="StartContainer for 
\"2e076923838cedf5d1fbd5c0a1dac021d94edbe81d65e4a5684be3e80a6e9843\"" Sep 13 00:02:37.795685 kubelet[3139]: I0913 00:02:37.795638 3139 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7951ccfd-1c72-4867-a215-dc5163f06c2d-calico-apiserver-certs\") pod \"7951ccfd-1c72-4867-a215-dc5163f06c2d\" (UID: \"7951ccfd-1c72-4867-a215-dc5163f06c2d\") " Sep 13 00:02:37.795685 kubelet[3139]: I0913 00:02:37.795679 3139 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6bww\" (UniqueName: \"kubernetes.io/projected/7951ccfd-1c72-4867-a215-dc5163f06c2d-kube-api-access-k6bww\") pod \"7951ccfd-1c72-4867-a215-dc5163f06c2d\" (UID: \"7951ccfd-1c72-4867-a215-dc5163f06c2d\") " Sep 13 00:02:37.806166 kubelet[3139]: I0913 00:02:37.805426 3139 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7951ccfd-1c72-4867-a215-dc5163f06c2d-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "7951ccfd-1c72-4867-a215-dc5163f06c2d" (UID: "7951ccfd-1c72-4867-a215-dc5163f06c2d"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 13 00:02:37.806986 kubelet[3139]: I0913 00:02:37.806948 3139 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7951ccfd-1c72-4867-a215-dc5163f06c2d-kube-api-access-k6bww" (OuterVolumeSpecName: "kube-api-access-k6bww") pod "7951ccfd-1c72-4867-a215-dc5163f06c2d" (UID: "7951ccfd-1c72-4867-a215-dc5163f06c2d"). InnerVolumeSpecName "kube-api-access-k6bww". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 13 00:02:37.816290 systemd[1]: Started cri-containerd-2e076923838cedf5d1fbd5c0a1dac021d94edbe81d65e4a5684be3e80a6e9843.scope - libcontainer container 2e076923838cedf5d1fbd5c0a1dac021d94edbe81d65e4a5684be3e80a6e9843. 
Sep 13 00:02:37.880701 kubelet[3139]: I0913 00:02:37.880417 3139 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-8447f55f8b-9w8q9" podStartSLOduration=28.521176654 podStartE2EDuration="36.880396631s" podCreationTimestamp="2025-09-13 00:02:01 +0000 UTC" firstStartedPulling="2025-09-13 00:02:28.899985876 +0000 UTC m=+49.705667179" lastFinishedPulling="2025-09-13 00:02:37.259205893 +0000 UTC m=+58.064887156" observedRunningTime="2025-09-13 00:02:37.768616208 +0000 UTC m=+58.574297591" watchObservedRunningTime="2025-09-13 00:02:37.880396631 +0000 UTC m=+58.686077894" Sep 13 00:02:37.896065 kubelet[3139]: I0913 00:02:37.896000 3139 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6bww\" (UniqueName: \"kubernetes.io/projected/7951ccfd-1c72-4867-a215-dc5163f06c2d-kube-api-access-k6bww\") on node \"ci-4081.3.5-n-a13ccab244\" DevicePath \"\"" Sep 13 00:02:37.896065 kubelet[3139]: I0913 00:02:37.896057 3139 reconciler_common.go:293] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7951ccfd-1c72-4867-a215-dc5163f06c2d-calico-apiserver-certs\") on node \"ci-4081.3.5-n-a13ccab244\" DevicePath \"\"" Sep 13 00:02:37.898007 containerd[1714]: time="2025-09-13T00:02:37.897956804Z" level=info msg="StartContainer for \"2e076923838cedf5d1fbd5c0a1dac021d94edbe81d65e4a5684be3e80a6e9843\" returns successfully" Sep 13 00:02:38.041939 systemd[1]: Removed slice kubepods-besteffort-pod7951ccfd_1c72_4867_a215_dc5163f06c2d.slice - libcontainer container kubepods-besteffort-pod7951ccfd_1c72_4867_a215_dc5163f06c2d.slice. Sep 13 00:02:38.223612 systemd[1]: run-netns-cni\x2d548fee37\x2d0418\x2dae76\x2ded15\x2df52e3ae0903c.mount: Deactivated successfully. Sep 13 00:02:38.224110 systemd[1]: var-lib-kubelet-pods-7951ccfd\x2d1c72\x2d4867\x2da215\x2ddc5163f06c2d-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Sep 13 00:02:38.224292 systemd[1]: var-lib-kubelet-pods-7951ccfd\x2d1c72\x2d4867\x2da215\x2ddc5163f06c2d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dk6bww.mount: Deactivated successfully. 
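A reading aid for the pod_startup_latency_tracker entry above: podStartSLOduration is the end-to-end startup time minus the image-pull window, with the pull window taken from the monotonic (m=+...) offsets rather than wall-clock timestamps:

\[
\begin{aligned}
\text{E2E} &= 00{:}02{:}37.880396631 - 00{:}02{:}01 = 36.880396631\,\mathrm{s},\\
\text{pull} &= 58.064887156 - 49.705667179 = 8.359219977\,\mathrm{s}\quad(m\text{-offsets}),\\
\text{SLO} &= 36.880396631 - 8.359219977 = 28.521176654\,\mathrm{s},
\end{aligned}
\]

which matches the logged podStartSLOduration=28.521176654 exactly.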
Sep 13 00:02:38.699073 containerd[1714]: time="2025-09-13T00:02:38.698690218Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:02:38.702937 containerd[1714]: time="2025-09-13T00:02:38.702835531Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=13761208" Sep 13 00:02:38.707144 containerd[1714]: time="2025-09-13T00:02:38.706964485Z" level=info msg="ImageCreate event name:\"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:02:38.713399 containerd[1714]: time="2025-09-13T00:02:38.713351194Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:02:38.714454 containerd[1714]: time="2025-09-13T00:02:38.714416593Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"15130401\" in 1.453146063s" Sep 13 00:02:38.714454 containerd[1714]: time="2025-09-13T00:02:38.714452793Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\"" Sep 13 00:02:38.716735 containerd[1714]: time="2025-09-13T00:02:38.716697549Z" level=info msg="CreateContainer within sandbox \"0b0a843ae894241a6b7cbd8a2aa904f8210396c3a3cc5b725ab6204e4381f0c6\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 13 00:02:38.760123 containerd[1714]: time="2025-09-13T00:02:38.760077601Z" level=info msg="CreateContainer within sandbox \"0b0a843ae894241a6b7cbd8a2aa904f8210396c3a3cc5b725ab6204e4381f0c6\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"197454cba02cc29c60e2058a2821a0d35786f4979aa138cdfbeb148c2fbf4e3a\"" Sep 13 00:02:38.761227 containerd[1714]: time="2025-09-13T00:02:38.761183599Z" level=info msg="StartContainer for \"197454cba02cc29c60e2058a2821a0d35786f4979aa138cdfbeb148c2fbf4e3a\"" Sep 13 00:02:38.765401 kubelet[3139]: I0913 00:02:38.765338 3139 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-77b8b896d4-xhwnz" podStartSLOduration=2.7653186720000003 podStartE2EDuration="2.765318672s" podCreationTimestamp="2025-09-13 00:02:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:02:38.762707516 +0000 UTC m=+59.568388819" watchObservedRunningTime="2025-09-13 00:02:38.765318672 +0000 UTC m=+59.570999975" Sep 13 00:02:38.824190 systemd[1]: Started cri-containerd-197454cba02cc29c60e2058a2821a0d35786f4979aa138cdfbeb148c2fbf4e3a.scope - libcontainer container 197454cba02cc29c60e2058a2821a0d35786f4979aa138cdfbeb148c2fbf4e3a. 
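The ImageCreate / "stop pulling image" / PullImage lines above are containerd serving the kubelet's CRI pull for ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3. The same pull can be reproduced with containerd's Go client; a minimal sketch, assuming the default socket and the "k8s.io" namespace that the CRI plugin stores its images in:

package main

import (
	"context"
	"fmt"
	"log"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed images live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Same image reference the kubelet pulled above.
	img, err := client.Pull(ctx,
		"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3",
		containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(img.Name(), img.Target().Digest)
}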
Sep 13 00:02:38.871055 containerd[1714]: time="2025-09-13T00:02:38.870988465Z" level=info msg="StartContainer for \"197454cba02cc29c60e2058a2821a0d35786f4979aa138cdfbeb148c2fbf4e3a\" returns successfully" Sep 13 00:02:38.889326 systemd-networkd[1567]: cali3ad0cc1c94c: Gained IPv6LL Sep 13 00:02:39.317130 containerd[1714]: time="2025-09-13T00:02:39.317088400Z" level=info msg="StopPodSandbox for \"a40af8bcd87db90cb1dc5121c4bf81444d9f5b8ea9b13fe3dedab7f4a9739efe\"" Sep 13 00:02:39.351639 kubelet[3139]: I0913 00:02:39.350968 3139 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7951ccfd-1c72-4867-a215-dc5163f06c2d" path="/var/lib/kubelet/pods/7951ccfd-1c72-4867-a215-dc5163f06c2d/volumes" Sep 13 00:02:39.406749 containerd[1714]: 2025-09-13 00:02:39.356 [WARNING][6415] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a40af8bcd87db90cb1dc5121c4bf81444d9f5b8ea9b13fe3dedab7f4a9739efe" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--a13ccab244-k8s-coredns--7c65d6cfc9--wm5tv-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"0aaf6e3b-2112-4be4-8bd2-d8200dc2d876", ResourceVersion:"1056", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 1, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-a13ccab244", ContainerID:"ae22384017e810641a7002d521cf77eb5399f5435979b3f4c4da77431815bcd8", Pod:"coredns-7c65d6cfc9-wm5tv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.38.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali498eccb5e3c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:02:39.406749 containerd[1714]: 2025-09-13 00:02:39.357 [INFO][6415] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a40af8bcd87db90cb1dc5121c4bf81444d9f5b8ea9b13fe3dedab7f4a9739efe" Sep 13 00:02:39.406749 containerd[1714]: 2025-09-13 00:02:39.357 [INFO][6415] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a40af8bcd87db90cb1dc5121c4bf81444d9f5b8ea9b13fe3dedab7f4a9739efe" iface="eth0" netns="" Sep 13 00:02:39.406749 containerd[1714]: 2025-09-13 00:02:39.357 [INFO][6415] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a40af8bcd87db90cb1dc5121c4bf81444d9f5b8ea9b13fe3dedab7f4a9739efe" Sep 13 00:02:39.406749 containerd[1714]: 2025-09-13 00:02:39.357 [INFO][6415] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a40af8bcd87db90cb1dc5121c4bf81444d9f5b8ea9b13fe3dedab7f4a9739efe" Sep 13 00:02:39.406749 containerd[1714]: 2025-09-13 00:02:39.385 [INFO][6425] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a40af8bcd87db90cb1dc5121c4bf81444d9f5b8ea9b13fe3dedab7f4a9739efe" HandleID="k8s-pod-network.a40af8bcd87db90cb1dc5121c4bf81444d9f5b8ea9b13fe3dedab7f4a9739efe" Workload="ci--4081.3.5--n--a13ccab244-k8s-coredns--7c65d6cfc9--wm5tv-eth0" Sep 13 00:02:39.406749 containerd[1714]: 2025-09-13 00:02:39.386 [INFO][6425] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:02:39.406749 containerd[1714]: 2025-09-13 00:02:39.387 [INFO][6425] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:02:39.406749 containerd[1714]: 2025-09-13 00:02:39.401 [WARNING][6425] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a40af8bcd87db90cb1dc5121c4bf81444d9f5b8ea9b13fe3dedab7f4a9739efe" HandleID="k8s-pod-network.a40af8bcd87db90cb1dc5121c4bf81444d9f5b8ea9b13fe3dedab7f4a9739efe" Workload="ci--4081.3.5--n--a13ccab244-k8s-coredns--7c65d6cfc9--wm5tv-eth0" Sep 13 00:02:39.406749 containerd[1714]: 2025-09-13 00:02:39.401 [INFO][6425] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a40af8bcd87db90cb1dc5121c4bf81444d9f5b8ea9b13fe3dedab7f4a9739efe" HandleID="k8s-pod-network.a40af8bcd87db90cb1dc5121c4bf81444d9f5b8ea9b13fe3dedab7f4a9739efe" Workload="ci--4081.3.5--n--a13ccab244-k8s-coredns--7c65d6cfc9--wm5tv-eth0" Sep 13 00:02:39.406749 containerd[1714]: 2025-09-13 00:02:39.403 [INFO][6425] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:02:39.406749 containerd[1714]: 2025-09-13 00:02:39.405 [INFO][6415] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="a40af8bcd87db90cb1dc5121c4bf81444d9f5b8ea9b13fe3dedab7f4a9739efe" Sep 13 00:02:39.407333 containerd[1714]: time="2025-09-13T00:02:39.407305697Z" level=info msg="TearDown network for sandbox \"a40af8bcd87db90cb1dc5121c4bf81444d9f5b8ea9b13fe3dedab7f4a9739efe\" successfully" Sep 13 00:02:39.407465 containerd[1714]: time="2025-09-13T00:02:39.407379977Z" level=info msg="StopPodSandbox for \"a40af8bcd87db90cb1dc5121c4bf81444d9f5b8ea9b13fe3dedab7f4a9739efe\" returns successfully" Sep 13 00:02:39.408136 containerd[1714]: time="2025-09-13T00:02:39.408107256Z" level=info msg="RemovePodSandbox for \"a40af8bcd87db90cb1dc5121c4bf81444d9f5b8ea9b13fe3dedab7f4a9739efe\"" Sep 13 00:02:39.408196 containerd[1714]: time="2025-09-13T00:02:39.408147056Z" level=info msg="Forcibly stopping sandbox \"a40af8bcd87db90cb1dc5121c4bf81444d9f5b8ea9b13fe3dedab7f4a9739efe\"" Sep 13 00:02:39.455624 kubelet[3139]: I0913 00:02:39.455546 3139 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 13 00:02:39.459350 kubelet[3139]: I0913 00:02:39.458932 3139 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 13 00:02:39.495896 containerd[1714]: 2025-09-13 00:02:39.440 [WARNING][6440] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a40af8bcd87db90cb1dc5121c4bf81444d9f5b8ea9b13fe3dedab7f4a9739efe" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--a13ccab244-k8s-coredns--7c65d6cfc9--wm5tv-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"0aaf6e3b-2112-4be4-8bd2-d8200dc2d876", ResourceVersion:"1056", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 1, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-a13ccab244", ContainerID:"ae22384017e810641a7002d521cf77eb5399f5435979b3f4c4da77431815bcd8", Pod:"coredns-7c65d6cfc9-wm5tv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.38.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali498eccb5e3c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:02:39.495896 containerd[1714]: 2025-09-13 00:02:39.440 [INFO][6440] cni-plugin/k8s.go 640: 
Cleaning up netns ContainerID="a40af8bcd87db90cb1dc5121c4bf81444d9f5b8ea9b13fe3dedab7f4a9739efe" Sep 13 00:02:39.495896 containerd[1714]: 2025-09-13 00:02:39.440 [INFO][6440] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a40af8bcd87db90cb1dc5121c4bf81444d9f5b8ea9b13fe3dedab7f4a9739efe" iface="eth0" netns="" Sep 13 00:02:39.495896 containerd[1714]: 2025-09-13 00:02:39.440 [INFO][6440] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a40af8bcd87db90cb1dc5121c4bf81444d9f5b8ea9b13fe3dedab7f4a9739efe" Sep 13 00:02:39.495896 containerd[1714]: 2025-09-13 00:02:39.440 [INFO][6440] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a40af8bcd87db90cb1dc5121c4bf81444d9f5b8ea9b13fe3dedab7f4a9739efe" Sep 13 00:02:39.495896 containerd[1714]: 2025-09-13 00:02:39.465 [INFO][6447] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a40af8bcd87db90cb1dc5121c4bf81444d9f5b8ea9b13fe3dedab7f4a9739efe" HandleID="k8s-pod-network.a40af8bcd87db90cb1dc5121c4bf81444d9f5b8ea9b13fe3dedab7f4a9739efe" Workload="ci--4081.3.5--n--a13ccab244-k8s-coredns--7c65d6cfc9--wm5tv-eth0" Sep 13 00:02:39.495896 containerd[1714]: 2025-09-13 00:02:39.466 [INFO][6447] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:02:39.495896 containerd[1714]: 2025-09-13 00:02:39.466 [INFO][6447] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:02:39.495896 containerd[1714]: 2025-09-13 00:02:39.486 [WARNING][6447] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a40af8bcd87db90cb1dc5121c4bf81444d9f5b8ea9b13fe3dedab7f4a9739efe" HandleID="k8s-pod-network.a40af8bcd87db90cb1dc5121c4bf81444d9f5b8ea9b13fe3dedab7f4a9739efe" Workload="ci--4081.3.5--n--a13ccab244-k8s-coredns--7c65d6cfc9--wm5tv-eth0" Sep 13 00:02:39.495896 containerd[1714]: 2025-09-13 00:02:39.486 [INFO][6447] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a40af8bcd87db90cb1dc5121c4bf81444d9f5b8ea9b13fe3dedab7f4a9739efe" HandleID="k8s-pod-network.a40af8bcd87db90cb1dc5121c4bf81444d9f5b8ea9b13fe3dedab7f4a9739efe" Workload="ci--4081.3.5--n--a13ccab244-k8s-coredns--7c65d6cfc9--wm5tv-eth0" Sep 13 00:02:39.495896 containerd[1714]: 2025-09-13 00:02:39.490 [INFO][6447] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:02:39.495896 containerd[1714]: 2025-09-13 00:02:39.493 [INFO][6440] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a40af8bcd87db90cb1dc5121c4bf81444d9f5b8ea9b13fe3dedab7f4a9739efe" Sep 13 00:02:39.496324 containerd[1714]: time="2025-09-13T00:02:39.495945877Z" level=info msg="TearDown network for sandbox \"a40af8bcd87db90cb1dc5121c4bf81444d9f5b8ea9b13fe3dedab7f4a9739efe\" successfully" Sep 13 00:02:39.512771 containerd[1714]: time="2025-09-13T00:02:39.512710051Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a40af8bcd87db90cb1dc5121c4bf81444d9f5b8ea9b13fe3dedab7f4a9739efe\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
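The repeated StopPodSandbox / "Forcibly stopping sandbox" / RemovePodSandbox triples through this stretch are the kubelet garbage-collecting stale sandboxes via two plain CRI RPCs, and the "an error occurred when try to find sandbox: not found" warning shows containerd downgrading a missing sandbox to a non-fatal event. A sketch of the same stop-then-remove flow, reusing the connection setup from the earlier sketch (illustrative only, not containerd's or the kubelet's code):

package main

import (
	"context"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

// removeSandbox mirrors the stop-then-remove progression above; NotFound at
// either step is treated as already-done, matching the log's tolerant behavior.
func removeSandbox(ctx context.Context, rt runtimeapi.RuntimeServiceClient, id string) error {
	if _, err := rt.StopPodSandbox(ctx, &runtimeapi.StopPodSandboxRequest{PodSandboxId: id}); err != nil {
		if status.Code(err) != codes.NotFound {
			return err
		}
	}
	if _, err := rt.RemovePodSandbox(ctx, &runtimeapi.RemovePodSandboxRequest{PodSandboxId: id}); err != nil {
		if status.Code(err) != codes.NotFound {
			return err
		}
	}
	return nil
}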
Sep 13 00:02:39.512909 containerd[1714]: time="2025-09-13T00:02:39.512877410Z" level=info msg="RemovePodSandbox \"a40af8bcd87db90cb1dc5121c4bf81444d9f5b8ea9b13fe3dedab7f4a9739efe\" returns successfully" Sep 13 00:02:39.514307 containerd[1714]: time="2025-09-13T00:02:39.514269488Z" level=info msg="StopPodSandbox for \"b7432d4b394985289e82e59ddf4f61935a36d0ff24a272ff4c220e0c5c66058b\"" Sep 13 00:02:39.606438 containerd[1714]: 2025-09-13 00:02:39.564 [WARNING][6463] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b7432d4b394985289e82e59ddf4f61935a36d0ff24a272ff4c220e0c5c66058b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--5dd67bc454--rxrtp-eth0", GenerateName:"calico-apiserver-5dd67bc454-", Namespace:"calico-apiserver", SelfLink:"", UID:"c293651a-dfe1-4de8-a4dd-be90531f8a49", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 1, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dd67bc454", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-a13ccab244", ContainerID:"bd9dbe351b6b5df2822671ed801620e0f27fd3e3163fa08ceaedbc1fe727bd40", Pod:"calico-apiserver-5dd67bc454-rxrtp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.38.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali31c167188ec", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:02:39.606438 containerd[1714]: 2025-09-13 00:02:39.564 [INFO][6463] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b7432d4b394985289e82e59ddf4f61935a36d0ff24a272ff4c220e0c5c66058b" Sep 13 00:02:39.606438 containerd[1714]: 2025-09-13 00:02:39.564 [INFO][6463] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b7432d4b394985289e82e59ddf4f61935a36d0ff24a272ff4c220e0c5c66058b" iface="eth0" netns="" Sep 13 00:02:39.606438 containerd[1714]: 2025-09-13 00:02:39.564 [INFO][6463] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b7432d4b394985289e82e59ddf4f61935a36d0ff24a272ff4c220e0c5c66058b" Sep 13 00:02:39.606438 containerd[1714]: 2025-09-13 00:02:39.564 [INFO][6463] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b7432d4b394985289e82e59ddf4f61935a36d0ff24a272ff4c220e0c5c66058b" Sep 13 00:02:39.606438 containerd[1714]: 2025-09-13 00:02:39.593 [INFO][6470] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b7432d4b394985289e82e59ddf4f61935a36d0ff24a272ff4c220e0c5c66058b" HandleID="k8s-pod-network.b7432d4b394985289e82e59ddf4f61935a36d0ff24a272ff4c220e0c5c66058b" Workload="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--5dd67bc454--rxrtp-eth0" Sep 13 00:02:39.606438 containerd[1714]: 2025-09-13 00:02:39.593 [INFO][6470] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:02:39.606438 containerd[1714]: 2025-09-13 00:02:39.593 [INFO][6470] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:02:39.606438 containerd[1714]: 2025-09-13 00:02:39.601 [WARNING][6470] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b7432d4b394985289e82e59ddf4f61935a36d0ff24a272ff4c220e0c5c66058b" HandleID="k8s-pod-network.b7432d4b394985289e82e59ddf4f61935a36d0ff24a272ff4c220e0c5c66058b" Workload="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--5dd67bc454--rxrtp-eth0" Sep 13 00:02:39.606438 containerd[1714]: 2025-09-13 00:02:39.601 [INFO][6470] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b7432d4b394985289e82e59ddf4f61935a36d0ff24a272ff4c220e0c5c66058b" HandleID="k8s-pod-network.b7432d4b394985289e82e59ddf4f61935a36d0ff24a272ff4c220e0c5c66058b" Workload="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--5dd67bc454--rxrtp-eth0" Sep 13 00:02:39.606438 containerd[1714]: 2025-09-13 00:02:39.603 [INFO][6470] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:02:39.606438 containerd[1714]: 2025-09-13 00:02:39.604 [INFO][6463] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b7432d4b394985289e82e59ddf4f61935a36d0ff24a272ff4c220e0c5c66058b" Sep 13 00:02:39.606820 containerd[1714]: time="2025-09-13T00:02:39.606449903Z" level=info msg="TearDown network for sandbox \"b7432d4b394985289e82e59ddf4f61935a36d0ff24a272ff4c220e0c5c66058b\" successfully" Sep 13 00:02:39.607570 containerd[1714]: time="2025-09-13T00:02:39.607539141Z" level=info msg="StopPodSandbox for \"b7432d4b394985289e82e59ddf4f61935a36d0ff24a272ff4c220e0c5c66058b\" returns successfully" Sep 13 00:02:39.608156 containerd[1714]: time="2025-09-13T00:02:39.608132340Z" level=info msg="RemovePodSandbox for \"b7432d4b394985289e82e59ddf4f61935a36d0ff24a272ff4c220e0c5c66058b\"" Sep 13 00:02:39.608196 containerd[1714]: time="2025-09-13T00:02:39.608165900Z" level=info msg="Forcibly stopping sandbox \"b7432d4b394985289e82e59ddf4f61935a36d0ff24a272ff4c220e0c5c66058b\"" Sep 13 00:02:39.671803 containerd[1714]: 2025-09-13 00:02:39.639 [WARNING][6485] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b7432d4b394985289e82e59ddf4f61935a36d0ff24a272ff4c220e0c5c66058b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--5dd67bc454--rxrtp-eth0", GenerateName:"calico-apiserver-5dd67bc454-", Namespace:"calico-apiserver", SelfLink:"", UID:"c293651a-dfe1-4de8-a4dd-be90531f8a49", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 1, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dd67bc454", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-a13ccab244", ContainerID:"bd9dbe351b6b5df2822671ed801620e0f27fd3e3163fa08ceaedbc1fe727bd40", Pod:"calico-apiserver-5dd67bc454-rxrtp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.38.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali31c167188ec", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:02:39.671803 containerd[1714]: 2025-09-13 00:02:39.640 [INFO][6485] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b7432d4b394985289e82e59ddf4f61935a36d0ff24a272ff4c220e0c5c66058b" Sep 13 00:02:39.671803 containerd[1714]: 2025-09-13 00:02:39.640 [INFO][6485] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b7432d4b394985289e82e59ddf4f61935a36d0ff24a272ff4c220e0c5c66058b" iface="eth0" netns="" Sep 13 00:02:39.671803 containerd[1714]: 2025-09-13 00:02:39.640 [INFO][6485] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b7432d4b394985289e82e59ddf4f61935a36d0ff24a272ff4c220e0c5c66058b" Sep 13 00:02:39.671803 containerd[1714]: 2025-09-13 00:02:39.640 [INFO][6485] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b7432d4b394985289e82e59ddf4f61935a36d0ff24a272ff4c220e0c5c66058b" Sep 13 00:02:39.671803 containerd[1714]: 2025-09-13 00:02:39.658 [INFO][6492] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b7432d4b394985289e82e59ddf4f61935a36d0ff24a272ff4c220e0c5c66058b" HandleID="k8s-pod-network.b7432d4b394985289e82e59ddf4f61935a36d0ff24a272ff4c220e0c5c66058b" Workload="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--5dd67bc454--rxrtp-eth0" Sep 13 00:02:39.671803 containerd[1714]: 2025-09-13 00:02:39.658 [INFO][6492] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:02:39.671803 containerd[1714]: 2025-09-13 00:02:39.658 [INFO][6492] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:02:39.671803 containerd[1714]: 2025-09-13 00:02:39.667 [WARNING][6492] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b7432d4b394985289e82e59ddf4f61935a36d0ff24a272ff4c220e0c5c66058b" HandleID="k8s-pod-network.b7432d4b394985289e82e59ddf4f61935a36d0ff24a272ff4c220e0c5c66058b" Workload="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--5dd67bc454--rxrtp-eth0" Sep 13 00:02:39.671803 containerd[1714]: 2025-09-13 00:02:39.667 [INFO][6492] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b7432d4b394985289e82e59ddf4f61935a36d0ff24a272ff4c220e0c5c66058b" HandleID="k8s-pod-network.b7432d4b394985289e82e59ddf4f61935a36d0ff24a272ff4c220e0c5c66058b" Workload="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--5dd67bc454--rxrtp-eth0" Sep 13 00:02:39.671803 containerd[1714]: 2025-09-13 00:02:39.668 [INFO][6492] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:02:39.671803 containerd[1714]: 2025-09-13 00:02:39.670 [INFO][6485] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b7432d4b394985289e82e59ddf4f61935a36d0ff24a272ff4c220e0c5c66058b" Sep 13 00:02:39.672334 containerd[1714]: time="2025-09-13T00:02:39.671838759Z" level=info msg="TearDown network for sandbox \"b7432d4b394985289e82e59ddf4f61935a36d0ff24a272ff4c220e0c5c66058b\" successfully" Sep 13 00:02:39.682661 containerd[1714]: time="2025-09-13T00:02:39.682616142Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b7432d4b394985289e82e59ddf4f61935a36d0ff24a272ff4c220e0c5c66058b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:02:39.682762 containerd[1714]: time="2025-09-13T00:02:39.682692982Z" level=info msg="RemovePodSandbox \"b7432d4b394985289e82e59ddf4f61935a36d0ff24a272ff4c220e0c5c66058b\" returns successfully" Sep 13 00:02:39.683189 containerd[1714]: time="2025-09-13T00:02:39.683155781Z" level=info msg="StopPodSandbox for \"3375c91854b6104522d2d825899e6319bf11e5cb8d422eccdfa0b733282a1652\"" Sep 13 00:02:39.767096 kubelet[3139]: I0913 00:02:39.767065 3139 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:02:39.782799 containerd[1714]: 2025-09-13 00:02:39.732 [WARNING][6506] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3375c91854b6104522d2d825899e6319bf11e5cb8d422eccdfa0b733282a1652" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--77b8b896d4--dwdsk-eth0", GenerateName:"calico-apiserver-77b8b896d4-", Namespace:"calico-apiserver", SelfLink:"", UID:"6bd411cd-c359-4d2d-9237-39cca6617339", ResourceVersion:"1089", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 1, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77b8b896d4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-a13ccab244", ContainerID:"f092d12b6c48736ed5e7263f33fe481b9a9ac48fce9bff2354295eae3237d8ef", Pod:"calico-apiserver-77b8b896d4-dwdsk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.38.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali395485cfe31", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:02:39.782799 containerd[1714]: 2025-09-13 00:02:39.736 [INFO][6506] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3375c91854b6104522d2d825899e6319bf11e5cb8d422eccdfa0b733282a1652" Sep 13 00:02:39.782799 containerd[1714]: 2025-09-13 00:02:39.736 [INFO][6506] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3375c91854b6104522d2d825899e6319bf11e5cb8d422eccdfa0b733282a1652" iface="eth0" netns="" Sep 13 00:02:39.782799 containerd[1714]: 2025-09-13 00:02:39.736 [INFO][6506] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3375c91854b6104522d2d825899e6319bf11e5cb8d422eccdfa0b733282a1652" Sep 13 00:02:39.782799 containerd[1714]: 2025-09-13 00:02:39.736 [INFO][6506] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3375c91854b6104522d2d825899e6319bf11e5cb8d422eccdfa0b733282a1652" Sep 13 00:02:39.782799 containerd[1714]: 2025-09-13 00:02:39.758 [INFO][6513] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3375c91854b6104522d2d825899e6319bf11e5cb8d422eccdfa0b733282a1652" HandleID="k8s-pod-network.3375c91854b6104522d2d825899e6319bf11e5cb8d422eccdfa0b733282a1652" Workload="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--77b8b896d4--dwdsk-eth0" Sep 13 00:02:39.782799 containerd[1714]: 2025-09-13 00:02:39.758 [INFO][6513] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:02:39.782799 containerd[1714]: 2025-09-13 00:02:39.758 [INFO][6513] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:02:39.782799 containerd[1714]: 2025-09-13 00:02:39.775 [WARNING][6513] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3375c91854b6104522d2d825899e6319bf11e5cb8d422eccdfa0b733282a1652" HandleID="k8s-pod-network.3375c91854b6104522d2d825899e6319bf11e5cb8d422eccdfa0b733282a1652" Workload="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--77b8b896d4--dwdsk-eth0" Sep 13 00:02:39.782799 containerd[1714]: 2025-09-13 00:02:39.775 [INFO][6513] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3375c91854b6104522d2d825899e6319bf11e5cb8d422eccdfa0b733282a1652" HandleID="k8s-pod-network.3375c91854b6104522d2d825899e6319bf11e5cb8d422eccdfa0b733282a1652" Workload="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--77b8b896d4--dwdsk-eth0" Sep 13 00:02:39.782799 containerd[1714]: 2025-09-13 00:02:39.777 [INFO][6513] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:02:39.782799 containerd[1714]: 2025-09-13 00:02:39.780 [INFO][6506] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3375c91854b6104522d2d825899e6319bf11e5cb8d422eccdfa0b733282a1652" Sep 13 00:02:39.782799 containerd[1714]: time="2025-09-13T00:02:39.782677624Z" level=info msg="TearDown network for sandbox \"3375c91854b6104522d2d825899e6319bf11e5cb8d422eccdfa0b733282a1652\" successfully" Sep 13 00:02:39.782799 containerd[1714]: time="2025-09-13T00:02:39.782706264Z" level=info msg="StopPodSandbox for \"3375c91854b6104522d2d825899e6319bf11e5cb8d422eccdfa0b733282a1652\" returns successfully" Sep 13 00:02:39.783625 containerd[1714]: time="2025-09-13T00:02:39.783400543Z" level=info msg="RemovePodSandbox for \"3375c91854b6104522d2d825899e6319bf11e5cb8d422eccdfa0b733282a1652\"" Sep 13 00:02:39.783625 containerd[1714]: time="2025-09-13T00:02:39.783443463Z" level=info msg="Forcibly stopping sandbox \"3375c91854b6104522d2d825899e6319bf11e5cb8d422eccdfa0b733282a1652\"" Sep 13 00:02:39.791910 kubelet[3139]: I0913 00:02:39.790547 3139 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-xm8sj" podStartSLOduration=27.217852731 podStartE2EDuration="38.790529692s" podCreationTimestamp="2025-09-13 00:02:01 +0000 UTC" firstStartedPulling="2025-09-13 00:02:27.14257531 +0000 UTC m=+47.948256613" lastFinishedPulling="2025-09-13 00:02:38.715252271 +0000 UTC m=+59.520933574" observedRunningTime="2025-09-13 00:02:39.788590535 +0000 UTC m=+60.594271878" watchObservedRunningTime="2025-09-13 00:02:39.790529692 +0000 UTC m=+60.596210995" Sep 13 00:02:39.856688 containerd[1714]: 2025-09-13 00:02:39.821 [WARNING][6528] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3375c91854b6104522d2d825899e6319bf11e5cb8d422eccdfa0b733282a1652" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--77b8b896d4--dwdsk-eth0", GenerateName:"calico-apiserver-77b8b896d4-", Namespace:"calico-apiserver", SelfLink:"", UID:"6bd411cd-c359-4d2d-9237-39cca6617339", ResourceVersion:"1089", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 1, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77b8b896d4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-a13ccab244", ContainerID:"f092d12b6c48736ed5e7263f33fe481b9a9ac48fce9bff2354295eae3237d8ef", Pod:"calico-apiserver-77b8b896d4-dwdsk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.38.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali395485cfe31", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:02:39.856688 containerd[1714]: 2025-09-13 00:02:39.822 [INFO][6528] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3375c91854b6104522d2d825899e6319bf11e5cb8d422eccdfa0b733282a1652" Sep 13 00:02:39.856688 containerd[1714]: 2025-09-13 00:02:39.822 [INFO][6528] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3375c91854b6104522d2d825899e6319bf11e5cb8d422eccdfa0b733282a1652" iface="eth0" netns="" Sep 13 00:02:39.856688 containerd[1714]: 2025-09-13 00:02:39.822 [INFO][6528] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3375c91854b6104522d2d825899e6319bf11e5cb8d422eccdfa0b733282a1652" Sep 13 00:02:39.856688 containerd[1714]: 2025-09-13 00:02:39.822 [INFO][6528] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3375c91854b6104522d2d825899e6319bf11e5cb8d422eccdfa0b733282a1652" Sep 13 00:02:39.856688 containerd[1714]: 2025-09-13 00:02:39.839 [INFO][6535] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3375c91854b6104522d2d825899e6319bf11e5cb8d422eccdfa0b733282a1652" HandleID="k8s-pod-network.3375c91854b6104522d2d825899e6319bf11e5cb8d422eccdfa0b733282a1652" Workload="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--77b8b896d4--dwdsk-eth0" Sep 13 00:02:39.856688 containerd[1714]: 2025-09-13 00:02:39.839 [INFO][6535] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:02:39.856688 containerd[1714]: 2025-09-13 00:02:39.839 [INFO][6535] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:02:39.856688 containerd[1714]: 2025-09-13 00:02:39.852 [WARNING][6535] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3375c91854b6104522d2d825899e6319bf11e5cb8d422eccdfa0b733282a1652" HandleID="k8s-pod-network.3375c91854b6104522d2d825899e6319bf11e5cb8d422eccdfa0b733282a1652" Workload="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--77b8b896d4--dwdsk-eth0" Sep 13 00:02:39.856688 containerd[1714]: 2025-09-13 00:02:39.852 [INFO][6535] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3375c91854b6104522d2d825899e6319bf11e5cb8d422eccdfa0b733282a1652" HandleID="k8s-pod-network.3375c91854b6104522d2d825899e6319bf11e5cb8d422eccdfa0b733282a1652" Workload="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--77b8b896d4--dwdsk-eth0" Sep 13 00:02:39.856688 containerd[1714]: 2025-09-13 00:02:39.853 [INFO][6535] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:02:39.856688 containerd[1714]: 2025-09-13 00:02:39.855 [INFO][6528] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3375c91854b6104522d2d825899e6319bf11e5cb8d422eccdfa0b733282a1652" Sep 13 00:02:39.856688 containerd[1714]: time="2025-09-13T00:02:39.856653627Z" level=info msg="TearDown network for sandbox \"3375c91854b6104522d2d825899e6319bf11e5cb8d422eccdfa0b733282a1652\" successfully" Sep 13 00:02:39.870231 containerd[1714]: time="2025-09-13T00:02:39.870181926Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3375c91854b6104522d2d825899e6319bf11e5cb8d422eccdfa0b733282a1652\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:02:39.870459 containerd[1714]: time="2025-09-13T00:02:39.870262445Z" level=info msg="RemovePodSandbox \"3375c91854b6104522d2d825899e6319bf11e5cb8d422eccdfa0b733282a1652\" returns successfully" Sep 13 00:02:39.871054 containerd[1714]: time="2025-09-13T00:02:39.870755325Z" level=info msg="StopPodSandbox for \"c94244e508898d8531f5c2f8daf58e39611baff8ea98739a24bdb34b647425b8\"" Sep 13 00:02:39.937515 containerd[1714]: 2025-09-13 00:02:39.904 [WARNING][6549] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c94244e508898d8531f5c2f8daf58e39611baff8ea98739a24bdb34b647425b8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--a13ccab244-k8s-calico--kube--controllers--8447f55f8b--9w8q9-eth0", GenerateName:"calico-kube-controllers-8447f55f8b-", Namespace:"calico-system", SelfLink:"", UID:"ab136b13-4df9-4d75-8b01-4675fa58dba1", ResourceVersion:"1142", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 2, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8447f55f8b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-a13ccab244", ContainerID:"358f1899d134e484402438702720092ec59e5e915182df9bc1c74dd23c778e46", Pod:"calico-kube-controllers-8447f55f8b-9w8q9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.38.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib66f97ec93d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:02:39.937515 containerd[1714]: 2025-09-13 00:02:39.905 [INFO][6549] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c94244e508898d8531f5c2f8daf58e39611baff8ea98739a24bdb34b647425b8" Sep 13 00:02:39.937515 containerd[1714]: 2025-09-13 00:02:39.905 [INFO][6549] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c94244e508898d8531f5c2f8daf58e39611baff8ea98739a24bdb34b647425b8" iface="eth0" netns="" Sep 13 00:02:39.937515 containerd[1714]: 2025-09-13 00:02:39.905 [INFO][6549] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c94244e508898d8531f5c2f8daf58e39611baff8ea98739a24bdb34b647425b8" Sep 13 00:02:39.937515 containerd[1714]: 2025-09-13 00:02:39.905 [INFO][6549] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c94244e508898d8531f5c2f8daf58e39611baff8ea98739a24bdb34b647425b8" Sep 13 00:02:39.937515 containerd[1714]: 2025-09-13 00:02:39.924 [INFO][6557] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c94244e508898d8531f5c2f8daf58e39611baff8ea98739a24bdb34b647425b8" HandleID="k8s-pod-network.c94244e508898d8531f5c2f8daf58e39611baff8ea98739a24bdb34b647425b8" Workload="ci--4081.3.5--n--a13ccab244-k8s-calico--kube--controllers--8447f55f8b--9w8q9-eth0" Sep 13 00:02:39.937515 containerd[1714]: 2025-09-13 00:02:39.924 [INFO][6557] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:02:39.937515 containerd[1714]: 2025-09-13 00:02:39.924 [INFO][6557] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:02:39.937515 containerd[1714]: 2025-09-13 00:02:39.932 [WARNING][6557] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c94244e508898d8531f5c2f8daf58e39611baff8ea98739a24bdb34b647425b8" HandleID="k8s-pod-network.c94244e508898d8531f5c2f8daf58e39611baff8ea98739a24bdb34b647425b8" Workload="ci--4081.3.5--n--a13ccab244-k8s-calico--kube--controllers--8447f55f8b--9w8q9-eth0" Sep 13 00:02:39.937515 containerd[1714]: 2025-09-13 00:02:39.932 [INFO][6557] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c94244e508898d8531f5c2f8daf58e39611baff8ea98739a24bdb34b647425b8" HandleID="k8s-pod-network.c94244e508898d8531f5c2f8daf58e39611baff8ea98739a24bdb34b647425b8" Workload="ci--4081.3.5--n--a13ccab244-k8s-calico--kube--controllers--8447f55f8b--9w8q9-eth0" Sep 13 00:02:39.937515 containerd[1714]: 2025-09-13 00:02:39.934 [INFO][6557] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:02:39.937515 containerd[1714]: 2025-09-13 00:02:39.935 [INFO][6549] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c94244e508898d8531f5c2f8daf58e39611baff8ea98739a24bdb34b647425b8" Sep 13 00:02:39.937969 containerd[1714]: time="2025-09-13T00:02:39.937554619Z" level=info msg="TearDown network for sandbox \"c94244e508898d8531f5c2f8daf58e39611baff8ea98739a24bdb34b647425b8\" successfully" Sep 13 00:02:39.937969 containerd[1714]: time="2025-09-13T00:02:39.937590579Z" level=info msg="StopPodSandbox for \"c94244e508898d8531f5c2f8daf58e39611baff8ea98739a24bdb34b647425b8\" returns successfully" Sep 13 00:02:39.938231 containerd[1714]: time="2025-09-13T00:02:39.938207698Z" level=info msg="RemovePodSandbox for \"c94244e508898d8531f5c2f8daf58e39611baff8ea98739a24bdb34b647425b8\"" Sep 13 00:02:39.938279 containerd[1714]: time="2025-09-13T00:02:39.938240378Z" level=info msg="Forcibly stopping sandbox \"c94244e508898d8531f5c2f8daf58e39611baff8ea98739a24bdb34b647425b8\"" Sep 13 00:02:40.010906 containerd[1714]: 2025-09-13 00:02:39.969 [WARNING][6571] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c94244e508898d8531f5c2f8daf58e39611baff8ea98739a24bdb34b647425b8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--a13ccab244-k8s-calico--kube--controllers--8447f55f8b--9w8q9-eth0", GenerateName:"calico-kube-controllers-8447f55f8b-", Namespace:"calico-system", SelfLink:"", UID:"ab136b13-4df9-4d75-8b01-4675fa58dba1", ResourceVersion:"1142", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 2, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8447f55f8b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-a13ccab244", ContainerID:"358f1899d134e484402438702720092ec59e5e915182df9bc1c74dd23c778e46", Pod:"calico-kube-controllers-8447f55f8b-9w8q9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.38.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib66f97ec93d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:02:40.010906 containerd[1714]: 2025-09-13 00:02:39.969 [INFO][6571] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c94244e508898d8531f5c2f8daf58e39611baff8ea98739a24bdb34b647425b8" Sep 13 00:02:40.010906 containerd[1714]: 2025-09-13 00:02:39.969 [INFO][6571] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c94244e508898d8531f5c2f8daf58e39611baff8ea98739a24bdb34b647425b8" iface="eth0" netns="" Sep 13 00:02:40.010906 containerd[1714]: 2025-09-13 00:02:39.969 [INFO][6571] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c94244e508898d8531f5c2f8daf58e39611baff8ea98739a24bdb34b647425b8" Sep 13 00:02:40.010906 containerd[1714]: 2025-09-13 00:02:39.969 [INFO][6571] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c94244e508898d8531f5c2f8daf58e39611baff8ea98739a24bdb34b647425b8" Sep 13 00:02:40.010906 containerd[1714]: 2025-09-13 00:02:39.991 [INFO][6578] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c94244e508898d8531f5c2f8daf58e39611baff8ea98739a24bdb34b647425b8" HandleID="k8s-pod-network.c94244e508898d8531f5c2f8daf58e39611baff8ea98739a24bdb34b647425b8" Workload="ci--4081.3.5--n--a13ccab244-k8s-calico--kube--controllers--8447f55f8b--9w8q9-eth0" Sep 13 00:02:40.010906 containerd[1714]: 2025-09-13 00:02:39.991 [INFO][6578] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:02:40.010906 containerd[1714]: 2025-09-13 00:02:39.991 [INFO][6578] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:02:40.010906 containerd[1714]: 2025-09-13 00:02:40.005 [WARNING][6578] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c94244e508898d8531f5c2f8daf58e39611baff8ea98739a24bdb34b647425b8" HandleID="k8s-pod-network.c94244e508898d8531f5c2f8daf58e39611baff8ea98739a24bdb34b647425b8" Workload="ci--4081.3.5--n--a13ccab244-k8s-calico--kube--controllers--8447f55f8b--9w8q9-eth0" Sep 13 00:02:40.010906 containerd[1714]: 2025-09-13 00:02:40.005 [INFO][6578] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c94244e508898d8531f5c2f8daf58e39611baff8ea98739a24bdb34b647425b8" HandleID="k8s-pod-network.c94244e508898d8531f5c2f8daf58e39611baff8ea98739a24bdb34b647425b8" Workload="ci--4081.3.5--n--a13ccab244-k8s-calico--kube--controllers--8447f55f8b--9w8q9-eth0" Sep 13 00:02:40.010906 containerd[1714]: 2025-09-13 00:02:40.007 [INFO][6578] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:02:40.010906 containerd[1714]: 2025-09-13 00:02:40.009 [INFO][6571] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c94244e508898d8531f5c2f8daf58e39611baff8ea98739a24bdb34b647425b8" Sep 13 00:02:40.011327 containerd[1714]: time="2025-09-13T00:02:40.010947503Z" level=info msg="TearDown network for sandbox \"c94244e508898d8531f5c2f8daf58e39611baff8ea98739a24bdb34b647425b8\" successfully" Sep 13 00:02:40.025165 containerd[1714]: time="2025-09-13T00:02:40.025118561Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c94244e508898d8531f5c2f8daf58e39611baff8ea98739a24bdb34b647425b8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:02:40.025298 containerd[1714]: time="2025-09-13T00:02:40.025198521Z" level=info msg="RemovePodSandbox \"c94244e508898d8531f5c2f8daf58e39611baff8ea98739a24bdb34b647425b8\" returns successfully" Sep 13 00:02:40.025687 containerd[1714]: time="2025-09-13T00:02:40.025662040Z" level=info msg="StopPodSandbox for \"8b16589cee9644d11a53ec504a0b949ca17cab71e7560d229885d026373e87bd\"" Sep 13 00:02:40.099917 containerd[1714]: 2025-09-13 00:02:40.068 [WARNING][6592] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8b16589cee9644d11a53ec504a0b949ca17cab71e7560d229885d026373e87bd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--a13ccab244-k8s-goldmane--7988f88666--9gfvn-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"43be8aa2-8f03-4619-8b2e-6d113822f84e", ResourceVersion:"1073", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 2, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-a13ccab244", ContainerID:"45c1e6b471ae2216ed87136b7531057f2a88105a16f2764e27ae24711a07975a", Pod:"goldmane-7988f88666-9gfvn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.38.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0c7f4ded509", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:02:40.099917 containerd[1714]: 2025-09-13 00:02:40.068 [INFO][6592] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8b16589cee9644d11a53ec504a0b949ca17cab71e7560d229885d026373e87bd" Sep 13 00:02:40.099917 containerd[1714]: 2025-09-13 00:02:40.068 [INFO][6592] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8b16589cee9644d11a53ec504a0b949ca17cab71e7560d229885d026373e87bd" iface="eth0" netns="" Sep 13 00:02:40.099917 containerd[1714]: 2025-09-13 00:02:40.068 [INFO][6592] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8b16589cee9644d11a53ec504a0b949ca17cab71e7560d229885d026373e87bd" Sep 13 00:02:40.099917 containerd[1714]: 2025-09-13 00:02:40.068 [INFO][6592] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8b16589cee9644d11a53ec504a0b949ca17cab71e7560d229885d026373e87bd" Sep 13 00:02:40.099917 containerd[1714]: 2025-09-13 00:02:40.085 [INFO][6599] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8b16589cee9644d11a53ec504a0b949ca17cab71e7560d229885d026373e87bd" HandleID="k8s-pod-network.8b16589cee9644d11a53ec504a0b949ca17cab71e7560d229885d026373e87bd" Workload="ci--4081.3.5--n--a13ccab244-k8s-goldmane--7988f88666--9gfvn-eth0" Sep 13 00:02:40.099917 containerd[1714]: 2025-09-13 00:02:40.085 [INFO][6599] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:02:40.099917 containerd[1714]: 2025-09-13 00:02:40.085 [INFO][6599] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:02:40.099917 containerd[1714]: 2025-09-13 00:02:40.094 [WARNING][6599] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8b16589cee9644d11a53ec504a0b949ca17cab71e7560d229885d026373e87bd" HandleID="k8s-pod-network.8b16589cee9644d11a53ec504a0b949ca17cab71e7560d229885d026373e87bd" Workload="ci--4081.3.5--n--a13ccab244-k8s-goldmane--7988f88666--9gfvn-eth0" Sep 13 00:02:40.099917 containerd[1714]: 2025-09-13 00:02:40.094 [INFO][6599] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8b16589cee9644d11a53ec504a0b949ca17cab71e7560d229885d026373e87bd" HandleID="k8s-pod-network.8b16589cee9644d11a53ec504a0b949ca17cab71e7560d229885d026373e87bd" Workload="ci--4081.3.5--n--a13ccab244-k8s-goldmane--7988f88666--9gfvn-eth0" Sep 13 00:02:40.099917 containerd[1714]: 2025-09-13 00:02:40.095 [INFO][6599] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:02:40.099917 containerd[1714]: 2025-09-13 00:02:40.097 [INFO][6592] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8b16589cee9644d11a53ec504a0b949ca17cab71e7560d229885d026373e87bd" Sep 13 00:02:40.100344 containerd[1714]: time="2025-09-13T00:02:40.099975522Z" level=info msg="TearDown network for sandbox \"8b16589cee9644d11a53ec504a0b949ca17cab71e7560d229885d026373e87bd\" successfully" Sep 13 00:02:40.100344 containerd[1714]: time="2025-09-13T00:02:40.099999962Z" level=info msg="StopPodSandbox for \"8b16589cee9644d11a53ec504a0b949ca17cab71e7560d229885d026373e87bd\" returns successfully" Sep 13 00:02:40.100480 containerd[1714]: time="2025-09-13T00:02:40.100419442Z" level=info msg="RemovePodSandbox for \"8b16589cee9644d11a53ec504a0b949ca17cab71e7560d229885d026373e87bd\"" Sep 13 00:02:40.100514 containerd[1714]: time="2025-09-13T00:02:40.100479442Z" level=info msg="Forcibly stopping sandbox \"8b16589cee9644d11a53ec504a0b949ca17cab71e7560d229885d026373e87bd\"" Sep 13 00:02:40.166035 containerd[1714]: 2025-09-13 00:02:40.133 [WARNING][6614] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8b16589cee9644d11a53ec504a0b949ca17cab71e7560d229885d026373e87bd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--a13ccab244-k8s-goldmane--7988f88666--9gfvn-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"43be8aa2-8f03-4619-8b2e-6d113822f84e", ResourceVersion:"1073", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 2, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-a13ccab244", ContainerID:"45c1e6b471ae2216ed87136b7531057f2a88105a16f2764e27ae24711a07975a", Pod:"goldmane-7988f88666-9gfvn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.38.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0c7f4ded509", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:02:40.166035 containerd[1714]: 2025-09-13 00:02:40.133 [INFO][6614] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8b16589cee9644d11a53ec504a0b949ca17cab71e7560d229885d026373e87bd" Sep 13 00:02:40.166035 containerd[1714]: 2025-09-13 00:02:40.133 [INFO][6614] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8b16589cee9644d11a53ec504a0b949ca17cab71e7560d229885d026373e87bd" iface="eth0" netns="" Sep 13 00:02:40.166035 containerd[1714]: 2025-09-13 00:02:40.133 [INFO][6614] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8b16589cee9644d11a53ec504a0b949ca17cab71e7560d229885d026373e87bd" Sep 13 00:02:40.166035 containerd[1714]: 2025-09-13 00:02:40.133 [INFO][6614] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8b16589cee9644d11a53ec504a0b949ca17cab71e7560d229885d026373e87bd" Sep 13 00:02:40.166035 containerd[1714]: 2025-09-13 00:02:40.152 [INFO][6621] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8b16589cee9644d11a53ec504a0b949ca17cab71e7560d229885d026373e87bd" HandleID="k8s-pod-network.8b16589cee9644d11a53ec504a0b949ca17cab71e7560d229885d026373e87bd" Workload="ci--4081.3.5--n--a13ccab244-k8s-goldmane--7988f88666--9gfvn-eth0" Sep 13 00:02:40.166035 containerd[1714]: 2025-09-13 00:02:40.152 [INFO][6621] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:02:40.166035 containerd[1714]: 2025-09-13 00:02:40.152 [INFO][6621] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:02:40.166035 containerd[1714]: 2025-09-13 00:02:40.161 [WARNING][6621] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8b16589cee9644d11a53ec504a0b949ca17cab71e7560d229885d026373e87bd" HandleID="k8s-pod-network.8b16589cee9644d11a53ec504a0b949ca17cab71e7560d229885d026373e87bd" Workload="ci--4081.3.5--n--a13ccab244-k8s-goldmane--7988f88666--9gfvn-eth0" Sep 13 00:02:40.166035 containerd[1714]: 2025-09-13 00:02:40.161 [INFO][6621] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8b16589cee9644d11a53ec504a0b949ca17cab71e7560d229885d026373e87bd" HandleID="k8s-pod-network.8b16589cee9644d11a53ec504a0b949ca17cab71e7560d229885d026373e87bd" Workload="ci--4081.3.5--n--a13ccab244-k8s-goldmane--7988f88666--9gfvn-eth0" Sep 13 00:02:40.166035 containerd[1714]: 2025-09-13 00:02:40.163 [INFO][6621] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:02:40.166035 containerd[1714]: 2025-09-13 00:02:40.164 [INFO][6614] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8b16589cee9644d11a53ec504a0b949ca17cab71e7560d229885d026373e87bd" Sep 13 00:02:40.166495 containerd[1714]: time="2025-09-13T00:02:40.166033818Z" level=info msg="TearDown network for sandbox \"8b16589cee9644d11a53ec504a0b949ca17cab71e7560d229885d026373e87bd\" successfully" Sep 13 00:02:40.177906 containerd[1714]: time="2025-09-13T00:02:40.177747039Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8b16589cee9644d11a53ec504a0b949ca17cab71e7560d229885d026373e87bd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:02:40.177906 containerd[1714]: time="2025-09-13T00:02:40.177823839Z" level=info msg="RemovePodSandbox \"8b16589cee9644d11a53ec504a0b949ca17cab71e7560d229885d026373e87bd\" returns successfully" Sep 13 00:02:40.178404 containerd[1714]: time="2025-09-13T00:02:40.178379638Z" level=info msg="StopPodSandbox for \"cfce1458f54e5a38a1625c6fdd4be53b662beee0e800a3a81b2a5054d487d809\"" Sep 13 00:02:40.244265 containerd[1714]: 2025-09-13 00:02:40.211 [WARNING][6635] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="cfce1458f54e5a38a1625c6fdd4be53b662beee0e800a3a81b2a5054d487d809" WorkloadEndpoint="ci--4081.3.5--n--a13ccab244-k8s-whisker--758bd4bf57--4k4lt-eth0" Sep 13 00:02:40.244265 containerd[1714]: 2025-09-13 00:02:40.211 [INFO][6635] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cfce1458f54e5a38a1625c6fdd4be53b662beee0e800a3a81b2a5054d487d809" Sep 13 00:02:40.244265 containerd[1714]: 2025-09-13 00:02:40.211 [INFO][6635] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="cfce1458f54e5a38a1625c6fdd4be53b662beee0e800a3a81b2a5054d487d809" iface="eth0" netns="" Sep 13 00:02:40.244265 containerd[1714]: 2025-09-13 00:02:40.211 [INFO][6635] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cfce1458f54e5a38a1625c6fdd4be53b662beee0e800a3a81b2a5054d487d809" Sep 13 00:02:40.244265 containerd[1714]: 2025-09-13 00:02:40.211 [INFO][6635] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cfce1458f54e5a38a1625c6fdd4be53b662beee0e800a3a81b2a5054d487d809" Sep 13 00:02:40.244265 containerd[1714]: 2025-09-13 00:02:40.231 [INFO][6642] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cfce1458f54e5a38a1625c6fdd4be53b662beee0e800a3a81b2a5054d487d809" HandleID="k8s-pod-network.cfce1458f54e5a38a1625c6fdd4be53b662beee0e800a3a81b2a5054d487d809" Workload="ci--4081.3.5--n--a13ccab244-k8s-whisker--758bd4bf57--4k4lt-eth0" Sep 13 00:02:40.244265 containerd[1714]: 2025-09-13 00:02:40.231 [INFO][6642] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:02:40.244265 containerd[1714]: 2025-09-13 00:02:40.231 [INFO][6642] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:02:40.244265 containerd[1714]: 2025-09-13 00:02:40.239 [WARNING][6642] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cfce1458f54e5a38a1625c6fdd4be53b662beee0e800a3a81b2a5054d487d809" HandleID="k8s-pod-network.cfce1458f54e5a38a1625c6fdd4be53b662beee0e800a3a81b2a5054d487d809" Workload="ci--4081.3.5--n--a13ccab244-k8s-whisker--758bd4bf57--4k4lt-eth0" Sep 13 00:02:40.244265 containerd[1714]: 2025-09-13 00:02:40.239 [INFO][6642] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cfce1458f54e5a38a1625c6fdd4be53b662beee0e800a3a81b2a5054d487d809" HandleID="k8s-pod-network.cfce1458f54e5a38a1625c6fdd4be53b662beee0e800a3a81b2a5054d487d809" Workload="ci--4081.3.5--n--a13ccab244-k8s-whisker--758bd4bf57--4k4lt-eth0" Sep 13 00:02:40.244265 containerd[1714]: 2025-09-13 00:02:40.241 [INFO][6642] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:02:40.244265 containerd[1714]: 2025-09-13 00:02:40.242 [INFO][6635] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="cfce1458f54e5a38a1625c6fdd4be53b662beee0e800a3a81b2a5054d487d809" Sep 13 00:02:40.244727 containerd[1714]: time="2025-09-13T00:02:40.244314254Z" level=info msg="TearDown network for sandbox \"cfce1458f54e5a38a1625c6fdd4be53b662beee0e800a3a81b2a5054d487d809\" successfully" Sep 13 00:02:40.244727 containerd[1714]: time="2025-09-13T00:02:40.244350814Z" level=info msg="StopPodSandbox for \"cfce1458f54e5a38a1625c6fdd4be53b662beee0e800a3a81b2a5054d487d809\" returns successfully" Sep 13 00:02:40.244929 containerd[1714]: time="2025-09-13T00:02:40.244903213Z" level=info msg="RemovePodSandbox for \"cfce1458f54e5a38a1625c6fdd4be53b662beee0e800a3a81b2a5054d487d809\"" Sep 13 00:02:40.244973 containerd[1714]: time="2025-09-13T00:02:40.244935733Z" level=info msg="Forcibly stopping sandbox \"cfce1458f54e5a38a1625c6fdd4be53b662beee0e800a3a81b2a5054d487d809\"" Sep 13 00:02:40.312396 containerd[1714]: 2025-09-13 00:02:40.277 [WARNING][6656] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="cfce1458f54e5a38a1625c6fdd4be53b662beee0e800a3a81b2a5054d487d809" WorkloadEndpoint="ci--4081.3.5--n--a13ccab244-k8s-whisker--758bd4bf57--4k4lt-eth0" Sep 13 00:02:40.312396 containerd[1714]: 2025-09-13 00:02:40.278 [INFO][6656] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cfce1458f54e5a38a1625c6fdd4be53b662beee0e800a3a81b2a5054d487d809" Sep 13 00:02:40.312396 containerd[1714]: 2025-09-13 00:02:40.278 [INFO][6656] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cfce1458f54e5a38a1625c6fdd4be53b662beee0e800a3a81b2a5054d487d809" iface="eth0" netns="" Sep 13 00:02:40.312396 containerd[1714]: 2025-09-13 00:02:40.278 [INFO][6656] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cfce1458f54e5a38a1625c6fdd4be53b662beee0e800a3a81b2a5054d487d809" Sep 13 00:02:40.312396 containerd[1714]: 2025-09-13 00:02:40.278 [INFO][6656] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cfce1458f54e5a38a1625c6fdd4be53b662beee0e800a3a81b2a5054d487d809" Sep 13 00:02:40.312396 containerd[1714]: 2025-09-13 00:02:40.298 [INFO][6663] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cfce1458f54e5a38a1625c6fdd4be53b662beee0e800a3a81b2a5054d487d809" HandleID="k8s-pod-network.cfce1458f54e5a38a1625c6fdd4be53b662beee0e800a3a81b2a5054d487d809" Workload="ci--4081.3.5--n--a13ccab244-k8s-whisker--758bd4bf57--4k4lt-eth0" Sep 13 00:02:40.312396 containerd[1714]: 2025-09-13 00:02:40.298 [INFO][6663] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:02:40.312396 containerd[1714]: 2025-09-13 00:02:40.298 [INFO][6663] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:02:40.312396 containerd[1714]: 2025-09-13 00:02:40.307 [WARNING][6663] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cfce1458f54e5a38a1625c6fdd4be53b662beee0e800a3a81b2a5054d487d809" HandleID="k8s-pod-network.cfce1458f54e5a38a1625c6fdd4be53b662beee0e800a3a81b2a5054d487d809" Workload="ci--4081.3.5--n--a13ccab244-k8s-whisker--758bd4bf57--4k4lt-eth0" Sep 13 00:02:40.312396 containerd[1714]: 2025-09-13 00:02:40.307 [INFO][6663] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cfce1458f54e5a38a1625c6fdd4be53b662beee0e800a3a81b2a5054d487d809" HandleID="k8s-pod-network.cfce1458f54e5a38a1625c6fdd4be53b662beee0e800a3a81b2a5054d487d809" Workload="ci--4081.3.5--n--a13ccab244-k8s-whisker--758bd4bf57--4k4lt-eth0" Sep 13 00:02:40.312396 containerd[1714]: 2025-09-13 00:02:40.309 [INFO][6663] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:02:40.312396 containerd[1714]: 2025-09-13 00:02:40.310 [INFO][6656] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cfce1458f54e5a38a1625c6fdd4be53b662beee0e800a3a81b2a5054d487d809" Sep 13 00:02:40.312758 containerd[1714]: time="2025-09-13T00:02:40.312449666Z" level=info msg="TearDown network for sandbox \"cfce1458f54e5a38a1625c6fdd4be53b662beee0e800a3a81b2a5054d487d809\" successfully" Sep 13 00:02:40.324544 containerd[1714]: time="2025-09-13T00:02:40.324489967Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cfce1458f54e5a38a1625c6fdd4be53b662beee0e800a3a81b2a5054d487d809\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:02:40.324780 containerd[1714]: time="2025-09-13T00:02:40.324567007Z" level=info msg="RemovePodSandbox \"cfce1458f54e5a38a1625c6fdd4be53b662beee0e800a3a81b2a5054d487d809\" returns successfully" Sep 13 00:02:40.325077 containerd[1714]: time="2025-09-13T00:02:40.325047127Z" level=info msg="StopPodSandbox for \"9395a518ed208eb790340f2cfb83406f745f397a692a7ac472d7d8191d8d440b\"" Sep 13 00:02:40.392089 containerd[1714]: 2025-09-13 00:02:40.359 [WARNING][6677] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9395a518ed208eb790340f2cfb83406f745f397a692a7ac472d7d8191d8d440b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--a13ccab244-k8s-csi--node--driver--xm8sj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a5fb5c00-ba33-4510-871e-6572e3bb79c8", ResourceVersion:"1160", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 2, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-a13ccab244", ContainerID:"0b0a843ae894241a6b7cbd8a2aa904f8210396c3a3cc5b725ab6204e4381f0c6", Pod:"csi-node-driver-xm8sj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.38.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibb28e7e77a6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:02:40.392089 containerd[1714]: 2025-09-13 00:02:40.359 [INFO][6677] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9395a518ed208eb790340f2cfb83406f745f397a692a7ac472d7d8191d8d440b" Sep 13 00:02:40.392089 containerd[1714]: 2025-09-13 00:02:40.359 [INFO][6677] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9395a518ed208eb790340f2cfb83406f745f397a692a7ac472d7d8191d8d440b" iface="eth0" netns="" Sep 13 00:02:40.392089 containerd[1714]: 2025-09-13 00:02:40.359 [INFO][6677] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9395a518ed208eb790340f2cfb83406f745f397a692a7ac472d7d8191d8d440b" Sep 13 00:02:40.392089 containerd[1714]: 2025-09-13 00:02:40.359 [INFO][6677] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9395a518ed208eb790340f2cfb83406f745f397a692a7ac472d7d8191d8d440b" Sep 13 00:02:40.392089 containerd[1714]: 2025-09-13 00:02:40.378 [INFO][6684] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9395a518ed208eb790340f2cfb83406f745f397a692a7ac472d7d8191d8d440b" HandleID="k8s-pod-network.9395a518ed208eb790340f2cfb83406f745f397a692a7ac472d7d8191d8d440b" Workload="ci--4081.3.5--n--a13ccab244-k8s-csi--node--driver--xm8sj-eth0" Sep 13 00:02:40.392089 containerd[1714]: 2025-09-13 00:02:40.378 [INFO][6684] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:02:40.392089 containerd[1714]: 2025-09-13 00:02:40.378 [INFO][6684] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:02:40.392089 containerd[1714]: 2025-09-13 00:02:40.387 [WARNING][6684] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9395a518ed208eb790340f2cfb83406f745f397a692a7ac472d7d8191d8d440b" HandleID="k8s-pod-network.9395a518ed208eb790340f2cfb83406f745f397a692a7ac472d7d8191d8d440b" Workload="ci--4081.3.5--n--a13ccab244-k8s-csi--node--driver--xm8sj-eth0" Sep 13 00:02:40.392089 containerd[1714]: 2025-09-13 00:02:40.387 [INFO][6684] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9395a518ed208eb790340f2cfb83406f745f397a692a7ac472d7d8191d8d440b" HandleID="k8s-pod-network.9395a518ed208eb790340f2cfb83406f745f397a692a7ac472d7d8191d8d440b" Workload="ci--4081.3.5--n--a13ccab244-k8s-csi--node--driver--xm8sj-eth0" Sep 13 00:02:40.392089 containerd[1714]: 2025-09-13 00:02:40.389 [INFO][6684] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:02:40.392089 containerd[1714]: 2025-09-13 00:02:40.390 [INFO][6677] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9395a518ed208eb790340f2cfb83406f745f397a692a7ac472d7d8191d8d440b" Sep 13 00:02:40.392863 containerd[1714]: time="2025-09-13T00:02:40.392144780Z" level=info msg="TearDown network for sandbox \"9395a518ed208eb790340f2cfb83406f745f397a692a7ac472d7d8191d8d440b\" successfully" Sep 13 00:02:40.392863 containerd[1714]: time="2025-09-13T00:02:40.392169740Z" level=info msg="StopPodSandbox for \"9395a518ed208eb790340f2cfb83406f745f397a692a7ac472d7d8191d8d440b\" returns successfully" Sep 13 00:02:40.392863 containerd[1714]: time="2025-09-13T00:02:40.392646980Z" level=info msg="RemovePodSandbox for \"9395a518ed208eb790340f2cfb83406f745f397a692a7ac472d7d8191d8d440b\"" Sep 13 00:02:40.392863 containerd[1714]: time="2025-09-13T00:02:40.392676300Z" level=info msg="Forcibly stopping sandbox \"9395a518ed208eb790340f2cfb83406f745f397a692a7ac472d7d8191d8d440b\"" Sep 13 00:02:40.462063 containerd[1714]: 2025-09-13 00:02:40.426 [WARNING][6698] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9395a518ed208eb790340f2cfb83406f745f397a692a7ac472d7d8191d8d440b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--a13ccab244-k8s-csi--node--driver--xm8sj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a5fb5c00-ba33-4510-871e-6572e3bb79c8", ResourceVersion:"1160", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 2, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-a13ccab244", ContainerID:"0b0a843ae894241a6b7cbd8a2aa904f8210396c3a3cc5b725ab6204e4381f0c6", Pod:"csi-node-driver-xm8sj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.38.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibb28e7e77a6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:02:40.462063 containerd[1714]: 2025-09-13 00:02:40.426 [INFO][6698] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9395a518ed208eb790340f2cfb83406f745f397a692a7ac472d7d8191d8d440b" Sep 13 00:02:40.462063 containerd[1714]: 2025-09-13 00:02:40.426 [INFO][6698] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9395a518ed208eb790340f2cfb83406f745f397a692a7ac472d7d8191d8d440b" iface="eth0" netns="" Sep 13 00:02:40.462063 containerd[1714]: 2025-09-13 00:02:40.426 [INFO][6698] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9395a518ed208eb790340f2cfb83406f745f397a692a7ac472d7d8191d8d440b" Sep 13 00:02:40.462063 containerd[1714]: 2025-09-13 00:02:40.426 [INFO][6698] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9395a518ed208eb790340f2cfb83406f745f397a692a7ac472d7d8191d8d440b" Sep 13 00:02:40.462063 containerd[1714]: 2025-09-13 00:02:40.447 [INFO][6705] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9395a518ed208eb790340f2cfb83406f745f397a692a7ac472d7d8191d8d440b" HandleID="k8s-pod-network.9395a518ed208eb790340f2cfb83406f745f397a692a7ac472d7d8191d8d440b" Workload="ci--4081.3.5--n--a13ccab244-k8s-csi--node--driver--xm8sj-eth0" Sep 13 00:02:40.462063 containerd[1714]: 2025-09-13 00:02:40.447 [INFO][6705] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:02:40.462063 containerd[1714]: 2025-09-13 00:02:40.447 [INFO][6705] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:02:40.462063 containerd[1714]: 2025-09-13 00:02:40.457 [WARNING][6705] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9395a518ed208eb790340f2cfb83406f745f397a692a7ac472d7d8191d8d440b" HandleID="k8s-pod-network.9395a518ed208eb790340f2cfb83406f745f397a692a7ac472d7d8191d8d440b" Workload="ci--4081.3.5--n--a13ccab244-k8s-csi--node--driver--xm8sj-eth0" Sep 13 00:02:40.462063 containerd[1714]: 2025-09-13 00:02:40.457 [INFO][6705] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9395a518ed208eb790340f2cfb83406f745f397a692a7ac472d7d8191d8d440b" HandleID="k8s-pod-network.9395a518ed208eb790340f2cfb83406f745f397a692a7ac472d7d8191d8d440b" Workload="ci--4081.3.5--n--a13ccab244-k8s-csi--node--driver--xm8sj-eth0" Sep 13 00:02:40.462063 containerd[1714]: 2025-09-13 00:02:40.458 [INFO][6705] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:02:40.462063 containerd[1714]: 2025-09-13 00:02:40.459 [INFO][6698] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9395a518ed208eb790340f2cfb83406f745f397a692a7ac472d7d8191d8d440b" Sep 13 00:02:40.462063 containerd[1714]: time="2025-09-13T00:02:40.461674471Z" level=info msg="TearDown network for sandbox \"9395a518ed208eb790340f2cfb83406f745f397a692a7ac472d7d8191d8d440b\" successfully" Sep 13 00:02:40.474160 containerd[1714]: time="2025-09-13T00:02:40.474109251Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9395a518ed208eb790340f2cfb83406f745f397a692a7ac472d7d8191d8d440b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:02:40.474486 containerd[1714]: time="2025-09-13T00:02:40.474197611Z" level=info msg="RemovePodSandbox \"9395a518ed208eb790340f2cfb83406f745f397a692a7ac472d7d8191d8d440b\" returns successfully" Sep 13 00:02:40.475084 containerd[1714]: time="2025-09-13T00:02:40.474634970Z" level=info msg="StopPodSandbox for \"1c5c23e66664020f926d562547ab4d70df51570f45e4244c6074804886cf538c\"" Sep 13 00:02:40.547260 containerd[1714]: 2025-09-13 00:02:40.513 [WARNING][6723] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1c5c23e66664020f926d562547ab4d70df51570f45e4244c6074804886cf538c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--a13ccab244-k8s-coredns--7c65d6cfc9--wfk44-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"b2871a94-de4f-43dc-9c64-d0d7c69fe615", ResourceVersion:"1037", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 1, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-a13ccab244", ContainerID:"dfbaf709bd2a86801e45083e16e9840b19339e55555063e1119bbcc555d8e333", Pod:"coredns-7c65d6cfc9-wfk44", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.38.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic15881fce0c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:02:40.547260 containerd[1714]: 2025-09-13 00:02:40.513 [INFO][6723] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1c5c23e66664020f926d562547ab4d70df51570f45e4244c6074804886cf538c" Sep 13 00:02:40.547260 containerd[1714]: 2025-09-13 00:02:40.513 [INFO][6723] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1c5c23e66664020f926d562547ab4d70df51570f45e4244c6074804886cf538c" iface="eth0" netns="" Sep 13 00:02:40.547260 containerd[1714]: 2025-09-13 00:02:40.513 [INFO][6723] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1c5c23e66664020f926d562547ab4d70df51570f45e4244c6074804886cf538c" Sep 13 00:02:40.547260 containerd[1714]: 2025-09-13 00:02:40.513 [INFO][6723] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1c5c23e66664020f926d562547ab4d70df51570f45e4244c6074804886cf538c" Sep 13 00:02:40.547260 containerd[1714]: 2025-09-13 00:02:40.533 [INFO][6732] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1c5c23e66664020f926d562547ab4d70df51570f45e4244c6074804886cf538c" HandleID="k8s-pod-network.1c5c23e66664020f926d562547ab4d70df51570f45e4244c6074804886cf538c" Workload="ci--4081.3.5--n--a13ccab244-k8s-coredns--7c65d6cfc9--wfk44-eth0" Sep 13 00:02:40.547260 containerd[1714]: 2025-09-13 00:02:40.533 [INFO][6732] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:02:40.547260 containerd[1714]: 2025-09-13 00:02:40.533 [INFO][6732] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:02:40.547260 containerd[1714]: 2025-09-13 00:02:40.542 [WARNING][6732] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1c5c23e66664020f926d562547ab4d70df51570f45e4244c6074804886cf538c" HandleID="k8s-pod-network.1c5c23e66664020f926d562547ab4d70df51570f45e4244c6074804886cf538c" Workload="ci--4081.3.5--n--a13ccab244-k8s-coredns--7c65d6cfc9--wfk44-eth0" Sep 13 00:02:40.547260 containerd[1714]: 2025-09-13 00:02:40.542 [INFO][6732] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1c5c23e66664020f926d562547ab4d70df51570f45e4244c6074804886cf538c" HandleID="k8s-pod-network.1c5c23e66664020f926d562547ab4d70df51570f45e4244c6074804886cf538c" Workload="ci--4081.3.5--n--a13ccab244-k8s-coredns--7c65d6cfc9--wfk44-eth0" Sep 13 00:02:40.547260 containerd[1714]: 2025-09-13 00:02:40.543 [INFO][6732] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:02:40.547260 containerd[1714]: 2025-09-13 00:02:40.545 [INFO][6723] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1c5c23e66664020f926d562547ab4d70df51570f45e4244c6074804886cf538c" Sep 13 00:02:40.547260 containerd[1714]: time="2025-09-13T00:02:40.547144895Z" level=info msg="TearDown network for sandbox \"1c5c23e66664020f926d562547ab4d70df51570f45e4244c6074804886cf538c\" successfully" Sep 13 00:02:40.547260 containerd[1714]: time="2025-09-13T00:02:40.547170135Z" level=info msg="StopPodSandbox for \"1c5c23e66664020f926d562547ab4d70df51570f45e4244c6074804886cf538c\" returns successfully" Sep 13 00:02:40.548174 containerd[1714]: time="2025-09-13T00:02:40.548100134Z" level=info msg="RemovePodSandbox for \"1c5c23e66664020f926d562547ab4d70df51570f45e4244c6074804886cf538c\"" Sep 13 00:02:40.548174 containerd[1714]: time="2025-09-13T00:02:40.548134814Z" level=info msg="Forcibly stopping sandbox \"1c5c23e66664020f926d562547ab4d70df51570f45e4244c6074804886cf538c\"" Sep 13 00:02:40.616979 containerd[1714]: 2025-09-13 00:02:40.583 [WARNING][6746] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1c5c23e66664020f926d562547ab4d70df51570f45e4244c6074804886cf538c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--a13ccab244-k8s-coredns--7c65d6cfc9--wfk44-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"b2871a94-de4f-43dc-9c64-d0d7c69fe615", ResourceVersion:"1037", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 1, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-a13ccab244", ContainerID:"dfbaf709bd2a86801e45083e16e9840b19339e55555063e1119bbcc555d8e333", Pod:"coredns-7c65d6cfc9-wfk44", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.38.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic15881fce0c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:02:40.616979 containerd[1714]: 2025-09-13 00:02:40.583 [INFO][6746] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1c5c23e66664020f926d562547ab4d70df51570f45e4244c6074804886cf538c" Sep 13 00:02:40.616979 containerd[1714]: 2025-09-13 00:02:40.583 [INFO][6746] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1c5c23e66664020f926d562547ab4d70df51570f45e4244c6074804886cf538c" iface="eth0" netns="" Sep 13 00:02:40.616979 containerd[1714]: 2025-09-13 00:02:40.583 [INFO][6746] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1c5c23e66664020f926d562547ab4d70df51570f45e4244c6074804886cf538c" Sep 13 00:02:40.616979 containerd[1714]: 2025-09-13 00:02:40.583 [INFO][6746] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1c5c23e66664020f926d562547ab4d70df51570f45e4244c6074804886cf538c" Sep 13 00:02:40.616979 containerd[1714]: 2025-09-13 00:02:40.602 [INFO][6754] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1c5c23e66664020f926d562547ab4d70df51570f45e4244c6074804886cf538c" HandleID="k8s-pod-network.1c5c23e66664020f926d562547ab4d70df51570f45e4244c6074804886cf538c" Workload="ci--4081.3.5--n--a13ccab244-k8s-coredns--7c65d6cfc9--wfk44-eth0" Sep 13 00:02:40.616979 containerd[1714]: 2025-09-13 00:02:40.602 [INFO][6754] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:02:40.616979 containerd[1714]: 2025-09-13 00:02:40.602 [INFO][6754] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:02:40.616979 containerd[1714]: 2025-09-13 00:02:40.612 [WARNING][6754] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1c5c23e66664020f926d562547ab4d70df51570f45e4244c6074804886cf538c" HandleID="k8s-pod-network.1c5c23e66664020f926d562547ab4d70df51570f45e4244c6074804886cf538c" Workload="ci--4081.3.5--n--a13ccab244-k8s-coredns--7c65d6cfc9--wfk44-eth0" Sep 13 00:02:40.616979 containerd[1714]: 2025-09-13 00:02:40.612 [INFO][6754] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1c5c23e66664020f926d562547ab4d70df51570f45e4244c6074804886cf538c" HandleID="k8s-pod-network.1c5c23e66664020f926d562547ab4d70df51570f45e4244c6074804886cf538c" Workload="ci--4081.3.5--n--a13ccab244-k8s-coredns--7c65d6cfc9--wfk44-eth0" Sep 13 00:02:40.616979 containerd[1714]: 2025-09-13 00:02:40.613 [INFO][6754] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:02:40.616979 containerd[1714]: 2025-09-13 00:02:40.615 [INFO][6746] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1c5c23e66664020f926d562547ab4d70df51570f45e4244c6074804886cf538c" Sep 13 00:02:40.617407 containerd[1714]: time="2025-09-13T00:02:40.617034625Z" level=info msg="TearDown network for sandbox \"1c5c23e66664020f926d562547ab4d70df51570f45e4244c6074804886cf538c\" successfully" Sep 13 00:02:40.626372 containerd[1714]: time="2025-09-13T00:02:40.626311771Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1c5c23e66664020f926d562547ab4d70df51570f45e4244c6074804886cf538c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:02:40.626810 containerd[1714]: time="2025-09-13T00:02:40.626388851Z" level=info msg="RemovePodSandbox \"1c5c23e66664020f926d562547ab4d70df51570f45e4244c6074804886cf538c\" returns successfully" Sep 13 00:02:40.627313 containerd[1714]: time="2025-09-13T00:02:40.627043050Z" level=info msg="StopPodSandbox for \"807ca00e6df247995bf8fc66c092154982bf3c3d5167b2143ead362087ddff60\"" Sep 13 00:02:40.702881 containerd[1714]: 2025-09-13 00:02:40.660 [WARNING][6768] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="807ca00e6df247995bf8fc66c092154982bf3c3d5167b2143ead362087ddff60" WorkloadEndpoint="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--5dd67bc454--q6cd7-eth0" Sep 13 00:02:40.702881 containerd[1714]: 2025-09-13 00:02:40.660 [INFO][6768] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="807ca00e6df247995bf8fc66c092154982bf3c3d5167b2143ead362087ddff60" Sep 13 00:02:40.702881 containerd[1714]: 2025-09-13 00:02:40.660 [INFO][6768] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="807ca00e6df247995bf8fc66c092154982bf3c3d5167b2143ead362087ddff60" iface="eth0" netns="" Sep 13 00:02:40.702881 containerd[1714]: 2025-09-13 00:02:40.660 [INFO][6768] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="807ca00e6df247995bf8fc66c092154982bf3c3d5167b2143ead362087ddff60" Sep 13 00:02:40.702881 containerd[1714]: 2025-09-13 00:02:40.660 [INFO][6768] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="807ca00e6df247995bf8fc66c092154982bf3c3d5167b2143ead362087ddff60" Sep 13 00:02:40.702881 containerd[1714]: 2025-09-13 00:02:40.687 [INFO][6775] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="807ca00e6df247995bf8fc66c092154982bf3c3d5167b2143ead362087ddff60" HandleID="k8s-pod-network.807ca00e6df247995bf8fc66c092154982bf3c3d5167b2143ead362087ddff60" Workload="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--5dd67bc454--q6cd7-eth0" Sep 13 00:02:40.702881 containerd[1714]: 2025-09-13 00:02:40.687 [INFO][6775] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:02:40.702881 containerd[1714]: 2025-09-13 00:02:40.687 [INFO][6775] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:02:40.702881 containerd[1714]: 2025-09-13 00:02:40.698 [WARNING][6775] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="807ca00e6df247995bf8fc66c092154982bf3c3d5167b2143ead362087ddff60" HandleID="k8s-pod-network.807ca00e6df247995bf8fc66c092154982bf3c3d5167b2143ead362087ddff60" Workload="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--5dd67bc454--q6cd7-eth0" Sep 13 00:02:40.702881 containerd[1714]: 2025-09-13 00:02:40.698 [INFO][6775] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="807ca00e6df247995bf8fc66c092154982bf3c3d5167b2143ead362087ddff60" HandleID="k8s-pod-network.807ca00e6df247995bf8fc66c092154982bf3c3d5167b2143ead362087ddff60" Workload="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--5dd67bc454--q6cd7-eth0" Sep 13 00:02:40.702881 containerd[1714]: 2025-09-13 00:02:40.699 [INFO][6775] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:02:40.702881 containerd[1714]: 2025-09-13 00:02:40.701 [INFO][6768] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="807ca00e6df247995bf8fc66c092154982bf3c3d5167b2143ead362087ddff60" Sep 13 00:02:40.703674 containerd[1714]: time="2025-09-13T00:02:40.703336054Z" level=info msg="TearDown network for sandbox \"807ca00e6df247995bf8fc66c092154982bf3c3d5167b2143ead362087ddff60\" successfully" Sep 13 00:02:40.703674 containerd[1714]: time="2025-09-13T00:02:40.703368454Z" level=info msg="StopPodSandbox for \"807ca00e6df247995bf8fc66c092154982bf3c3d5167b2143ead362087ddff60\" returns successfully" Sep 13 00:02:40.704258 containerd[1714]: time="2025-09-13T00:02:40.704227333Z" level=info msg="RemovePodSandbox for \"807ca00e6df247995bf8fc66c092154982bf3c3d5167b2143ead362087ddff60\"" Sep 13 00:02:40.704341 containerd[1714]: time="2025-09-13T00:02:40.704264053Z" level=info msg="Forcibly stopping sandbox \"807ca00e6df247995bf8fc66c092154982bf3c3d5167b2143ead362087ddff60\"" Sep 13 00:02:40.769115 containerd[1714]: 2025-09-13 00:02:40.735 [WARNING][6789] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="807ca00e6df247995bf8fc66c092154982bf3c3d5167b2143ead362087ddff60" WorkloadEndpoint="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--5dd67bc454--q6cd7-eth0" Sep 13 00:02:40.769115 containerd[1714]: 2025-09-13 00:02:40.736 [INFO][6789] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="807ca00e6df247995bf8fc66c092154982bf3c3d5167b2143ead362087ddff60" Sep 13 00:02:40.769115 containerd[1714]: 2025-09-13 00:02:40.736 [INFO][6789] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="807ca00e6df247995bf8fc66c092154982bf3c3d5167b2143ead362087ddff60" iface="eth0" netns="" Sep 13 00:02:40.769115 containerd[1714]: 2025-09-13 00:02:40.736 [INFO][6789] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="807ca00e6df247995bf8fc66c092154982bf3c3d5167b2143ead362087ddff60" Sep 13 00:02:40.769115 containerd[1714]: 2025-09-13 00:02:40.736 [INFO][6789] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="807ca00e6df247995bf8fc66c092154982bf3c3d5167b2143ead362087ddff60" Sep 13 00:02:40.769115 containerd[1714]: 2025-09-13 00:02:40.754 [INFO][6796] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="807ca00e6df247995bf8fc66c092154982bf3c3d5167b2143ead362087ddff60" HandleID="k8s-pod-network.807ca00e6df247995bf8fc66c092154982bf3c3d5167b2143ead362087ddff60" Workload="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--5dd67bc454--q6cd7-eth0" Sep 13 00:02:40.769115 containerd[1714]: 2025-09-13 00:02:40.754 [INFO][6796] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:02:40.769115 containerd[1714]: 2025-09-13 00:02:40.754 [INFO][6796] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:02:40.769115 containerd[1714]: 2025-09-13 00:02:40.762 [WARNING][6796] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="807ca00e6df247995bf8fc66c092154982bf3c3d5167b2143ead362087ddff60" HandleID="k8s-pod-network.807ca00e6df247995bf8fc66c092154982bf3c3d5167b2143ead362087ddff60" Workload="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--5dd67bc454--q6cd7-eth0" Sep 13 00:02:40.769115 containerd[1714]: 2025-09-13 00:02:40.762 [INFO][6796] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="807ca00e6df247995bf8fc66c092154982bf3c3d5167b2143ead362087ddff60" HandleID="k8s-pod-network.807ca00e6df247995bf8fc66c092154982bf3c3d5167b2143ead362087ddff60" Workload="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--5dd67bc454--q6cd7-eth0" Sep 13 00:02:40.769115 containerd[1714]: 2025-09-13 00:02:40.764 [INFO][6796] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:02:40.769115 containerd[1714]: 2025-09-13 00:02:40.765 [INFO][6789] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="807ca00e6df247995bf8fc66c092154982bf3c3d5167b2143ead362087ddff60" Sep 13 00:02:40.769115 containerd[1714]: time="2025-09-13T00:02:40.768681595Z" level=info msg="TearDown network for sandbox \"807ca00e6df247995bf8fc66c092154982bf3c3d5167b2143ead362087ddff60\" successfully" Sep 13 00:02:40.779863 containerd[1714]: time="2025-09-13T00:02:40.779819938Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"807ca00e6df247995bf8fc66c092154982bf3c3d5167b2143ead362087ddff60\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:02:40.780118 containerd[1714]: time="2025-09-13T00:02:40.780098018Z" level=info msg="RemovePodSandbox \"807ca00e6df247995bf8fc66c092154982bf3c3d5167b2143ead362087ddff60\" returns successfully" Sep 13 00:02:40.780458 containerd[1714]: time="2025-09-13T00:02:40.780432257Z" level=info msg="StopPodSandbox for \"69857c107a218561075b0242a55026b5f0b1c797abc2f21530d49275b03ee7ec\"" Sep 13 00:02:40.847122 containerd[1714]: 2025-09-13 00:02:40.814 [WARNING][6810] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="69857c107a218561075b0242a55026b5f0b1c797abc2f21530d49275b03ee7ec" WorkloadEndpoint="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--5dd67bc454--q6cd7-eth0" Sep 13 00:02:40.847122 containerd[1714]: 2025-09-13 00:02:40.814 [INFO][6810] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="69857c107a218561075b0242a55026b5f0b1c797abc2f21530d49275b03ee7ec" Sep 13 00:02:40.847122 containerd[1714]: 2025-09-13 00:02:40.814 [INFO][6810] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="69857c107a218561075b0242a55026b5f0b1c797abc2f21530d49275b03ee7ec" iface="eth0" netns="" Sep 13 00:02:40.847122 containerd[1714]: 2025-09-13 00:02:40.814 [INFO][6810] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="69857c107a218561075b0242a55026b5f0b1c797abc2f21530d49275b03ee7ec" Sep 13 00:02:40.847122 containerd[1714]: 2025-09-13 00:02:40.814 [INFO][6810] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="69857c107a218561075b0242a55026b5f0b1c797abc2f21530d49275b03ee7ec" Sep 13 00:02:40.847122 containerd[1714]: 2025-09-13 00:02:40.833 [INFO][6817] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="69857c107a218561075b0242a55026b5f0b1c797abc2f21530d49275b03ee7ec" HandleID="k8s-pod-network.69857c107a218561075b0242a55026b5f0b1c797abc2f21530d49275b03ee7ec" Workload="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--5dd67bc454--q6cd7-eth0" Sep 13 00:02:40.847122 containerd[1714]: 2025-09-13 00:02:40.833 [INFO][6817] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:02:40.847122 containerd[1714]: 2025-09-13 00:02:40.833 [INFO][6817] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:02:40.847122 containerd[1714]: 2025-09-13 00:02:40.841 [WARNING][6817] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="69857c107a218561075b0242a55026b5f0b1c797abc2f21530d49275b03ee7ec" HandleID="k8s-pod-network.69857c107a218561075b0242a55026b5f0b1c797abc2f21530d49275b03ee7ec" Workload="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--5dd67bc454--q6cd7-eth0" Sep 13 00:02:40.847122 containerd[1714]: 2025-09-13 00:02:40.841 [INFO][6817] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="69857c107a218561075b0242a55026b5f0b1c797abc2f21530d49275b03ee7ec" HandleID="k8s-pod-network.69857c107a218561075b0242a55026b5f0b1c797abc2f21530d49275b03ee7ec" Workload="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--5dd67bc454--q6cd7-eth0" Sep 13 00:02:40.847122 containerd[1714]: 2025-09-13 00:02:40.843 [INFO][6817] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:02:40.847122 containerd[1714]: 2025-09-13 00:02:40.844 [INFO][6810] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="69857c107a218561075b0242a55026b5f0b1c797abc2f21530d49275b03ee7ec" Sep 13 00:02:40.847122 containerd[1714]: time="2025-09-13T00:02:40.846829717Z" level=info msg="TearDown network for sandbox \"69857c107a218561075b0242a55026b5f0b1c797abc2f21530d49275b03ee7ec\" successfully" Sep 13 00:02:40.847122 containerd[1714]: time="2025-09-13T00:02:40.846855957Z" level=info msg="StopPodSandbox for \"69857c107a218561075b0242a55026b5f0b1c797abc2f21530d49275b03ee7ec\" returns successfully" Sep 13 00:02:40.847817 containerd[1714]: time="2025-09-13T00:02:40.847393236Z" level=info msg="RemovePodSandbox for \"69857c107a218561075b0242a55026b5f0b1c797abc2f21530d49275b03ee7ec\"" Sep 13 00:02:40.847817 containerd[1714]: time="2025-09-13T00:02:40.847422756Z" level=info msg="Forcibly stopping sandbox \"69857c107a218561075b0242a55026b5f0b1c797abc2f21530d49275b03ee7ec\"" Sep 13 00:02:40.973349 containerd[1714]: 2025-09-13 00:02:40.921 [WARNING][6831] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="69857c107a218561075b0242a55026b5f0b1c797abc2f21530d49275b03ee7ec" WorkloadEndpoint="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--5dd67bc454--q6cd7-eth0" Sep 13 00:02:40.973349 containerd[1714]: 2025-09-13 00:02:40.922 [INFO][6831] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="69857c107a218561075b0242a55026b5f0b1c797abc2f21530d49275b03ee7ec" Sep 13 00:02:40.973349 containerd[1714]: 2025-09-13 00:02:40.922 [INFO][6831] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="69857c107a218561075b0242a55026b5f0b1c797abc2f21530d49275b03ee7ec" iface="eth0" netns="" Sep 13 00:02:40.973349 containerd[1714]: 2025-09-13 00:02:40.922 [INFO][6831] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="69857c107a218561075b0242a55026b5f0b1c797abc2f21530d49275b03ee7ec" Sep 13 00:02:40.973349 containerd[1714]: 2025-09-13 00:02:40.922 [INFO][6831] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="69857c107a218561075b0242a55026b5f0b1c797abc2f21530d49275b03ee7ec" Sep 13 00:02:40.973349 containerd[1714]: 2025-09-13 00:02:40.955 [INFO][6872] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="69857c107a218561075b0242a55026b5f0b1c797abc2f21530d49275b03ee7ec" HandleID="k8s-pod-network.69857c107a218561075b0242a55026b5f0b1c797abc2f21530d49275b03ee7ec" Workload="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--5dd67bc454--q6cd7-eth0" Sep 13 00:02:40.973349 containerd[1714]: 2025-09-13 00:02:40.956 [INFO][6872] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:02:40.973349 containerd[1714]: 2025-09-13 00:02:40.956 [INFO][6872] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:02:40.973349 containerd[1714]: 2025-09-13 00:02:40.967 [WARNING][6872] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="69857c107a218561075b0242a55026b5f0b1c797abc2f21530d49275b03ee7ec" HandleID="k8s-pod-network.69857c107a218561075b0242a55026b5f0b1c797abc2f21530d49275b03ee7ec" Workload="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--5dd67bc454--q6cd7-eth0" Sep 13 00:02:40.973349 containerd[1714]: 2025-09-13 00:02:40.967 [INFO][6872] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="69857c107a218561075b0242a55026b5f0b1c797abc2f21530d49275b03ee7ec" HandleID="k8s-pod-network.69857c107a218561075b0242a55026b5f0b1c797abc2f21530d49275b03ee7ec" Workload="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--5dd67bc454--q6cd7-eth0" Sep 13 00:02:40.973349 containerd[1714]: 2025-09-13 00:02:40.969 [INFO][6872] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:02:40.973349 containerd[1714]: 2025-09-13 00:02:40.970 [INFO][6831] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="69857c107a218561075b0242a55026b5f0b1c797abc2f21530d49275b03ee7ec" Sep 13 00:02:40.973349 containerd[1714]: time="2025-09-13T00:02:40.973269365Z" level=info msg="TearDown network for sandbox \"69857c107a218561075b0242a55026b5f0b1c797abc2f21530d49275b03ee7ec\" successfully" Sep 13 00:02:40.993411 containerd[1714]: time="2025-09-13T00:02:40.993357695Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"69857c107a218561075b0242a55026b5f0b1c797abc2f21530d49275b03ee7ec\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:02:40.993564 containerd[1714]: time="2025-09-13T00:02:40.993441414Z" level=info msg="RemovePodSandbox \"69857c107a218561075b0242a55026b5f0b1c797abc2f21530d49275b03ee7ec\" returns successfully" Sep 13 00:02:48.545162 kubelet[3139]: I0913 00:02:48.545121 3139 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:02:58.092185 kubelet[3139]: I0913 00:02:58.091927 3139 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:02:58.253110 containerd[1714]: time="2025-09-13T00:02:58.252705039Z" level=info msg="StopContainer for \"a48fa92ed2d33a6f68b3a8874798c2b6b683a8fe962927fc6886164fb4375d7b\" with timeout 30 (s)" Sep 13 00:02:58.254088 containerd[1714]: time="2025-09-13T00:02:58.253854997Z" level=info msg="Stop container \"a48fa92ed2d33a6f68b3a8874798c2b6b683a8fe962927fc6886164fb4375d7b\" with signal terminated" Sep 13 00:02:58.324983 systemd[1]: cri-containerd-a48fa92ed2d33a6f68b3a8874798c2b6b683a8fe962927fc6886164fb4375d7b.scope: Deactivated successfully. Sep 13 00:02:58.326050 systemd[1]: cri-containerd-a48fa92ed2d33a6f68b3a8874798c2b6b683a8fe962927fc6886164fb4375d7b.scope: Consumed 1.698s CPU time. Sep 13 00:02:58.364109 containerd[1714]: time="2025-09-13T00:02:58.360306695Z" level=info msg="shim disconnected" id=a48fa92ed2d33a6f68b3a8874798c2b6b683a8fe962927fc6886164fb4375d7b namespace=k8s.io Sep 13 00:02:58.364109 containerd[1714]: time="2025-09-13T00:02:58.360364415Z" level=warning msg="cleaning up after shim disconnected" id=a48fa92ed2d33a6f68b3a8874798c2b6b683a8fe962927fc6886164fb4375d7b namespace=k8s.io Sep 13 00:02:58.364109 containerd[1714]: time="2025-09-13T00:02:58.360375455Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:02:58.363580 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a48fa92ed2d33a6f68b3a8874798c2b6b683a8fe962927fc6886164fb4375d7b-rootfs.mount: Deactivated successfully. 
Sep 13 00:02:58.417860 containerd[1714]: time="2025-09-13T00:02:58.417254619Z" level=info msg="StopContainer for \"a48fa92ed2d33a6f68b3a8874798c2b6b683a8fe962927fc6886164fb4375d7b\" returns successfully" Sep 13 00:02:58.418554 containerd[1714]: time="2025-09-13T00:02:58.418520177Z" level=info msg="StopPodSandbox for \"bd9dbe351b6b5df2822671ed801620e0f27fd3e3163fa08ceaedbc1fe727bd40\"" Sep 13 00:02:58.418811 containerd[1714]: time="2025-09-13T00:02:58.418792177Z" level=info msg="Container to stop \"a48fa92ed2d33a6f68b3a8874798c2b6b683a8fe962927fc6886164fb4375d7b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:02:58.424449 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bd9dbe351b6b5df2822671ed801620e0f27fd3e3163fa08ceaedbc1fe727bd40-shm.mount: Deactivated successfully. Sep 13 00:02:58.432440 systemd[1]: cri-containerd-bd9dbe351b6b5df2822671ed801620e0f27fd3e3163fa08ceaedbc1fe727bd40.scope: Deactivated successfully. Sep 13 00:02:58.477957 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bd9dbe351b6b5df2822671ed801620e0f27fd3e3163fa08ceaedbc1fe727bd40-rootfs.mount: Deactivated successfully. Sep 13 00:02:58.480802 containerd[1714]: time="2025-09-13T00:02:58.480737694Z" level=info msg="shim disconnected" id=bd9dbe351b6b5df2822671ed801620e0f27fd3e3163fa08ceaedbc1fe727bd40 namespace=k8s.io Sep 13 00:02:58.481107 containerd[1714]: time="2025-09-13T00:02:58.480865213Z" level=warning msg="cleaning up after shim disconnected" id=bd9dbe351b6b5df2822671ed801620e0f27fd3e3163fa08ceaedbc1fe727bd40 namespace=k8s.io Sep 13 00:02:58.481107 containerd[1714]: time="2025-09-13T00:02:58.480877693Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:02:58.594755 systemd-networkd[1567]: cali31c167188ec: Link DOWN Sep 13 00:02:58.594766 systemd-networkd[1567]: cali31c167188ec: Lost carrier Sep 13 00:02:58.710736 containerd[1714]: 2025-09-13 00:02:58.592 [INFO][6998] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bd9dbe351b6b5df2822671ed801620e0f27fd3e3163fa08ceaedbc1fe727bd40" Sep 13 00:02:58.710736 containerd[1714]: 2025-09-13 00:02:58.592 [INFO][6998] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="bd9dbe351b6b5df2822671ed801620e0f27fd3e3163fa08ceaedbc1fe727bd40" iface="eth0" netns="/var/run/netns/cni-b9cb6d7b-6be5-93a4-cb9f-c9d28f14e39a" Sep 13 00:02:58.710736 containerd[1714]: 2025-09-13 00:02:58.592 [INFO][6998] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bd9dbe351b6b5df2822671ed801620e0f27fd3e3163fa08ceaedbc1fe727bd40" iface="eth0" netns="/var/run/netns/cni-b9cb6d7b-6be5-93a4-cb9f-c9d28f14e39a" Sep 13 00:02:58.710736 containerd[1714]: 2025-09-13 00:02:58.606 [INFO][6998] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="bd9dbe351b6b5df2822671ed801620e0f27fd3e3163fa08ceaedbc1fe727bd40" after=13.419582ms iface="eth0" netns="/var/run/netns/cni-b9cb6d7b-6be5-93a4-cb9f-c9d28f14e39a" Sep 13 00:02:58.710736 containerd[1714]: 2025-09-13 00:02:58.606 [INFO][6998] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bd9dbe351b6b5df2822671ed801620e0f27fd3e3163fa08ceaedbc1fe727bd40" Sep 13 00:02:58.710736 containerd[1714]: 2025-09-13 00:02:58.606 [INFO][6998] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bd9dbe351b6b5df2822671ed801620e0f27fd3e3163fa08ceaedbc1fe727bd40" Sep 13 00:02:58.710736 containerd[1714]: 2025-09-13 00:02:58.643 [INFO][7009] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bd9dbe351b6b5df2822671ed801620e0f27fd3e3163fa08ceaedbc1fe727bd40" HandleID="k8s-pod-network.bd9dbe351b6b5df2822671ed801620e0f27fd3e3163fa08ceaedbc1fe727bd40" Workload="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--5dd67bc454--rxrtp-eth0" Sep 13 00:02:58.710736 containerd[1714]: 2025-09-13 00:02:58.643 [INFO][7009] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:02:58.710736 containerd[1714]: 2025-09-13 00:02:58.643 [INFO][7009] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:02:58.710736 containerd[1714]: 2025-09-13 00:02:58.706 [INFO][7009] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="bd9dbe351b6b5df2822671ed801620e0f27fd3e3163fa08ceaedbc1fe727bd40" HandleID="k8s-pod-network.bd9dbe351b6b5df2822671ed801620e0f27fd3e3163fa08ceaedbc1fe727bd40" Workload="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--5dd67bc454--rxrtp-eth0" Sep 13 00:02:58.710736 containerd[1714]: 2025-09-13 00:02:58.706 [INFO][7009] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bd9dbe351b6b5df2822671ed801620e0f27fd3e3163fa08ceaedbc1fe727bd40" HandleID="k8s-pod-network.bd9dbe351b6b5df2822671ed801620e0f27fd3e3163fa08ceaedbc1fe727bd40" Workload="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--5dd67bc454--rxrtp-eth0" Sep 13 00:02:58.710736 containerd[1714]: 2025-09-13 00:02:58.708 [INFO][7009] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:02:58.710736 containerd[1714]: 2025-09-13 00:02:58.709 [INFO][6998] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bd9dbe351b6b5df2822671ed801620e0f27fd3e3163fa08ceaedbc1fe727bd40" Sep 13 00:02:58.714198 containerd[1714]: time="2025-09-13T00:02:58.714155141Z" level=info msg="TearDown network for sandbox \"bd9dbe351b6b5df2822671ed801620e0f27fd3e3163fa08ceaedbc1fe727bd40\" successfully" Sep 13 00:02:58.714198 containerd[1714]: time="2025-09-13T00:02:58.714193621Z" level=info msg="StopPodSandbox for \"bd9dbe351b6b5df2822671ed801620e0f27fd3e3163fa08ceaedbc1fe727bd40\" returns successfully" Sep 13 00:02:58.714888 systemd[1]: run-netns-cni\x2db9cb6d7b\x2d6be5\x2d93a4\x2dcb9f\x2dc9d28f14e39a.mount: Deactivated successfully. 
Sep 13 00:02:58.815051 kubelet[3139]: I0913 00:02:58.813707 3139 scope.go:117] "RemoveContainer" containerID="a48fa92ed2d33a6f68b3a8874798c2b6b683a8fe962927fc6886164fb4375d7b" Sep 13 00:02:58.817414 containerd[1714]: time="2025-09-13T00:02:58.817071644Z" level=info msg="RemoveContainer for \"a48fa92ed2d33a6f68b3a8874798c2b6b683a8fe962927fc6886164fb4375d7b\"" Sep 13 00:02:58.827689 containerd[1714]: time="2025-09-13T00:02:58.826840311Z" level=info msg="RemoveContainer for \"a48fa92ed2d33a6f68b3a8874798c2b6b683a8fe962927fc6886164fb4375d7b\" returns successfully" Sep 13 00:02:58.828258 kubelet[3139]: I0913 00:02:58.827935 3139 scope.go:117] "RemoveContainer" containerID="a48fa92ed2d33a6f68b3a8874798c2b6b683a8fe962927fc6886164fb4375d7b" Sep 13 00:02:58.828342 containerd[1714]: time="2025-09-13T00:02:58.828206229Z" level=error msg="ContainerStatus for \"a48fa92ed2d33a6f68b3a8874798c2b6b683a8fe962927fc6886164fb4375d7b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a48fa92ed2d33a6f68b3a8874798c2b6b683a8fe962927fc6886164fb4375d7b\": not found" Sep 13 00:02:58.828634 kubelet[3139]: E0913 00:02:58.828572 3139 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a48fa92ed2d33a6f68b3a8874798c2b6b683a8fe962927fc6886164fb4375d7b\": not found" containerID="a48fa92ed2d33a6f68b3a8874798c2b6b683a8fe962927fc6886164fb4375d7b" Sep 13 00:02:58.828634 kubelet[3139]: I0913 00:02:58.828600 3139 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a48fa92ed2d33a6f68b3a8874798c2b6b683a8fe962927fc6886164fb4375d7b"} err="failed to get container status \"a48fa92ed2d33a6f68b3a8874798c2b6b683a8fe962927fc6886164fb4375d7b\": rpc error: code = NotFound desc = an error occurred when try to find container \"a48fa92ed2d33a6f68b3a8874798c2b6b683a8fe962927fc6886164fb4375d7b\": not found" Sep 13 00:02:58.830254 kubelet[3139]: I0913 00:02:58.829698 3139 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c293651a-dfe1-4de8-a4dd-be90531f8a49-calico-apiserver-certs\") pod \"c293651a-dfe1-4de8-a4dd-be90531f8a49\" (UID: \"c293651a-dfe1-4de8-a4dd-be90531f8a49\") " Sep 13 00:02:58.830254 kubelet[3139]: I0913 00:02:58.829750 3139 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8rjj\" (UniqueName: \"kubernetes.io/projected/c293651a-dfe1-4de8-a4dd-be90531f8a49-kube-api-access-v8rjj\") pod \"c293651a-dfe1-4de8-a4dd-be90531f8a49\" (UID: \"c293651a-dfe1-4de8-a4dd-be90531f8a49\") " Sep 13 00:02:58.835129 systemd[1]: var-lib-kubelet-pods-c293651a\x2ddfe1\x2d4de8\x2da4dd\x2dbe90531f8a49-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dv8rjj.mount: Deactivated successfully. Sep 13 00:02:58.836346 kubelet[3139]: I0913 00:02:58.836311 3139 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c293651a-dfe1-4de8-a4dd-be90531f8a49-kube-api-access-v8rjj" (OuterVolumeSpecName: "kube-api-access-v8rjj") pod "c293651a-dfe1-4de8-a4dd-be90531f8a49" (UID: "c293651a-dfe1-4de8-a4dd-be90531f8a49"). InnerVolumeSpecName "kube-api-access-v8rjj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 13 00:02:58.839468 kubelet[3139]: I0913 00:02:58.839427 3139 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c293651a-dfe1-4de8-a4dd-be90531f8a49-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "c293651a-dfe1-4de8-a4dd-be90531f8a49" (UID: "c293651a-dfe1-4de8-a4dd-be90531f8a49"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 13 00:02:58.930923 kubelet[3139]: I0913 00:02:58.930882 3139 reconciler_common.go:293] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c293651a-dfe1-4de8-a4dd-be90531f8a49-calico-apiserver-certs\") on node \"ci-4081.3.5-n-a13ccab244\" DevicePath \"\"" Sep 13 00:02:58.932098 kubelet[3139]: I0913 00:02:58.932069 3139 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8rjj\" (UniqueName: \"kubernetes.io/projected/c293651a-dfe1-4de8-a4dd-be90531f8a49-kube-api-access-v8rjj\") on node \"ci-4081.3.5-n-a13ccab244\" DevicePath \"\"" Sep 13 00:02:59.127493 systemd[1]: Removed slice kubepods-besteffort-podc293651a_dfe1_4de8_a4dd_be90531f8a49.slice - libcontainer container kubepods-besteffort-podc293651a_dfe1_4de8_a4dd_be90531f8a49.slice. Sep 13 00:02:59.129184 systemd[1]: kubepods-besteffort-podc293651a_dfe1_4de8_a4dd_be90531f8a49.slice: Consumed 1.716s CPU time. Sep 13 00:02:59.332406 kubelet[3139]: I0913 00:02:59.331976 3139 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c293651a-dfe1-4de8-a4dd-be90531f8a49" path="/var/lib/kubelet/pods/c293651a-dfe1-4de8-a4dd-be90531f8a49/volumes" Sep 13 00:02:59.362485 systemd[1]: var-lib-kubelet-pods-c293651a\x2ddfe1\x2d4de8\x2da4dd\x2dbe90531f8a49-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Sep 13 00:03:29.593658 systemd[1]: Started sshd@7-10.200.20.14:22-10.200.16.10:48664.service - OpenSSH per-connection server daemon (10.200.16.10:48664). Sep 13 00:03:30.016005 sshd[7122]: Accepted publickey for core from 10.200.16.10 port 48664 ssh2: RSA SHA256:blsec9WXBGJMQIu7Y4EANO2ooyLDRMDELziWNnUsds8 Sep 13 00:03:30.020180 sshd[7122]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:03:30.025801 systemd-logind[1688]: New session 10 of user core. Sep 13 00:03:30.032201 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 13 00:03:30.408576 sshd[7122]: pam_unix(sshd:session): session closed for user core Sep 13 00:03:30.415077 systemd[1]: sshd@7-10.200.20.14:22-10.200.16.10:48664.service: Deactivated successfully. Sep 13 00:03:30.418391 systemd[1]: session-10.scope: Deactivated successfully. Sep 13 00:03:30.422154 systemd-logind[1688]: Session 10 logged out. Waiting for processes to exit. Sep 13 00:03:30.423318 systemd-logind[1688]: Removed session 10. Sep 13 00:03:35.494344 systemd[1]: Started sshd@8-10.200.20.14:22-10.200.16.10:42082.service - OpenSSH per-connection server daemon (10.200.16.10:42082). Sep 13 00:03:35.921196 sshd[7156]: Accepted publickey for core from 10.200.16.10 port 42082 ssh2: RSA SHA256:blsec9WXBGJMQIu7Y4EANO2ooyLDRMDELziWNnUsds8 Sep 13 00:03:35.923614 sshd[7156]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:03:35.930213 systemd-logind[1688]: New session 11 of user core. Sep 13 00:03:35.934363 systemd[1]: Started session-11.scope - Session 11 of User core. 
Sep 13 00:03:36.357638 sshd[7156]: pam_unix(sshd:session): session closed for user core Sep 13 00:03:36.361264 systemd[1]: sshd@8-10.200.20.14:22-10.200.16.10:42082.service: Deactivated successfully. Sep 13 00:03:36.363082 systemd[1]: session-11.scope: Deactivated successfully. Sep 13 00:03:36.363725 systemd-logind[1688]: Session 11 logged out. Waiting for processes to exit. Sep 13 00:03:36.364782 systemd-logind[1688]: Removed session 11. Sep 13 00:03:40.876652 systemd[1]: run-containerd-runc-k8s.io-33bce7e2b7036929ffd5e28e5eb9932cbf460c777fb0ccea717f039b64271a8e-runc.54Q3lW.mount: Deactivated successfully. Sep 13 00:03:40.997556 containerd[1714]: time="2025-09-13T00:03:40.997351969Z" level=info msg="StopPodSandbox for \"bd9dbe351b6b5df2822671ed801620e0f27fd3e3163fa08ceaedbc1fe727bd40\"" Sep 13 00:03:41.061143 containerd[1714]: 2025-09-13 00:03:41.030 [WARNING][7219] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="bd9dbe351b6b5df2822671ed801620e0f27fd3e3163fa08ceaedbc1fe727bd40" WorkloadEndpoint="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--5dd67bc454--rxrtp-eth0" Sep 13 00:03:41.061143 containerd[1714]: 2025-09-13 00:03:41.030 [INFO][7219] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bd9dbe351b6b5df2822671ed801620e0f27fd3e3163fa08ceaedbc1fe727bd40" Sep 13 00:03:41.061143 containerd[1714]: 2025-09-13 00:03:41.030 [INFO][7219] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bd9dbe351b6b5df2822671ed801620e0f27fd3e3163fa08ceaedbc1fe727bd40" iface="eth0" netns="" Sep 13 00:03:41.061143 containerd[1714]: 2025-09-13 00:03:41.030 [INFO][7219] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bd9dbe351b6b5df2822671ed801620e0f27fd3e3163fa08ceaedbc1fe727bd40" Sep 13 00:03:41.061143 containerd[1714]: 2025-09-13 00:03:41.030 [INFO][7219] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bd9dbe351b6b5df2822671ed801620e0f27fd3e3163fa08ceaedbc1fe727bd40" Sep 13 00:03:41.061143 containerd[1714]: 2025-09-13 00:03:41.048 [INFO][7226] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bd9dbe351b6b5df2822671ed801620e0f27fd3e3163fa08ceaedbc1fe727bd40" HandleID="k8s-pod-network.bd9dbe351b6b5df2822671ed801620e0f27fd3e3163fa08ceaedbc1fe727bd40" Workload="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--5dd67bc454--rxrtp-eth0" Sep 13 00:03:41.061143 containerd[1714]: 2025-09-13 00:03:41.048 [INFO][7226] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:03:41.061143 containerd[1714]: 2025-09-13 00:03:41.048 [INFO][7226] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:03:41.061143 containerd[1714]: 2025-09-13 00:03:41.056 [WARNING][7226] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bd9dbe351b6b5df2822671ed801620e0f27fd3e3163fa08ceaedbc1fe727bd40" HandleID="k8s-pod-network.bd9dbe351b6b5df2822671ed801620e0f27fd3e3163fa08ceaedbc1fe727bd40" Workload="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--5dd67bc454--rxrtp-eth0" Sep 13 00:03:41.061143 containerd[1714]: 2025-09-13 00:03:41.056 [INFO][7226] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bd9dbe351b6b5df2822671ed801620e0f27fd3e3163fa08ceaedbc1fe727bd40" HandleID="k8s-pod-network.bd9dbe351b6b5df2822671ed801620e0f27fd3e3163fa08ceaedbc1fe727bd40" Workload="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--5dd67bc454--rxrtp-eth0" Sep 13 00:03:41.061143 containerd[1714]: 2025-09-13 00:03:41.058 [INFO][7226] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:03:41.061143 containerd[1714]: 2025-09-13 00:03:41.059 [INFO][7219] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bd9dbe351b6b5df2822671ed801620e0f27fd3e3163fa08ceaedbc1fe727bd40" Sep 13 00:03:41.061516 containerd[1714]: time="2025-09-13T00:03:41.061195038Z" level=info msg="TearDown network for sandbox \"bd9dbe351b6b5df2822671ed801620e0f27fd3e3163fa08ceaedbc1fe727bd40\" successfully" Sep 13 00:03:41.061516 containerd[1714]: time="2025-09-13T00:03:41.061223798Z" level=info msg="StopPodSandbox for \"bd9dbe351b6b5df2822671ed801620e0f27fd3e3163fa08ceaedbc1fe727bd40\" returns successfully" Sep 13 00:03:41.062012 containerd[1714]: time="2025-09-13T00:03:41.061721277Z" level=info msg="RemovePodSandbox for \"bd9dbe351b6b5df2822671ed801620e0f27fd3e3163fa08ceaedbc1fe727bd40\"" Sep 13 00:03:41.062012 containerd[1714]: time="2025-09-13T00:03:41.061754077Z" level=info msg="Forcibly stopping sandbox \"bd9dbe351b6b5df2822671ed801620e0f27fd3e3163fa08ceaedbc1fe727bd40\"" Sep 13 00:03:41.125346 containerd[1714]: 2025-09-13 00:03:41.093 [WARNING][7240] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="bd9dbe351b6b5df2822671ed801620e0f27fd3e3163fa08ceaedbc1fe727bd40" WorkloadEndpoint="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--5dd67bc454--rxrtp-eth0" Sep 13 00:03:41.125346 containerd[1714]: 2025-09-13 00:03:41.093 [INFO][7240] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bd9dbe351b6b5df2822671ed801620e0f27fd3e3163fa08ceaedbc1fe727bd40" Sep 13 00:03:41.125346 containerd[1714]: 2025-09-13 00:03:41.093 [INFO][7240] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="bd9dbe351b6b5df2822671ed801620e0f27fd3e3163fa08ceaedbc1fe727bd40" iface="eth0" netns="" Sep 13 00:03:41.125346 containerd[1714]: 2025-09-13 00:03:41.093 [INFO][7240] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bd9dbe351b6b5df2822671ed801620e0f27fd3e3163fa08ceaedbc1fe727bd40" Sep 13 00:03:41.125346 containerd[1714]: 2025-09-13 00:03:41.093 [INFO][7240] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bd9dbe351b6b5df2822671ed801620e0f27fd3e3163fa08ceaedbc1fe727bd40" Sep 13 00:03:41.125346 containerd[1714]: 2025-09-13 00:03:41.112 [INFO][7247] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bd9dbe351b6b5df2822671ed801620e0f27fd3e3163fa08ceaedbc1fe727bd40" HandleID="k8s-pod-network.bd9dbe351b6b5df2822671ed801620e0f27fd3e3163fa08ceaedbc1fe727bd40" Workload="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--5dd67bc454--rxrtp-eth0" Sep 13 00:03:41.125346 containerd[1714]: 2025-09-13 00:03:41.112 [INFO][7247] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:03:41.125346 containerd[1714]: 2025-09-13 00:03:41.112 [INFO][7247] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:03:41.125346 containerd[1714]: 2025-09-13 00:03:41.120 [WARNING][7247] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bd9dbe351b6b5df2822671ed801620e0f27fd3e3163fa08ceaedbc1fe727bd40" HandleID="k8s-pod-network.bd9dbe351b6b5df2822671ed801620e0f27fd3e3163fa08ceaedbc1fe727bd40" Workload="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--5dd67bc454--rxrtp-eth0" Sep 13 00:03:41.125346 containerd[1714]: 2025-09-13 00:03:41.120 [INFO][7247] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bd9dbe351b6b5df2822671ed801620e0f27fd3e3163fa08ceaedbc1fe727bd40" HandleID="k8s-pod-network.bd9dbe351b6b5df2822671ed801620e0f27fd3e3163fa08ceaedbc1fe727bd40" Workload="ci--4081.3.5--n--a13ccab244-k8s-calico--apiserver--5dd67bc454--rxrtp-eth0" Sep 13 00:03:41.125346 containerd[1714]: 2025-09-13 00:03:41.122 [INFO][7247] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:03:41.125346 containerd[1714]: 2025-09-13 00:03:41.124 [INFO][7240] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bd9dbe351b6b5df2822671ed801620e0f27fd3e3163fa08ceaedbc1fe727bd40" Sep 13 00:03:41.125698 containerd[1714]: time="2025-09-13T00:03:41.125493785Z" level=info msg="TearDown network for sandbox \"bd9dbe351b6b5df2822671ed801620e0f27fd3e3163fa08ceaedbc1fe727bd40\" successfully" Sep 13 00:03:41.152268 containerd[1714]: time="2025-09-13T00:03:41.152131907Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bd9dbe351b6b5df2822671ed801620e0f27fd3e3163fa08ceaedbc1fe727bd40\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:03:41.152268 containerd[1714]: time="2025-09-13T00:03:41.152251946Z" level=info msg="RemovePodSandbox \"bd9dbe351b6b5df2822671ed801620e0f27fd3e3163fa08ceaedbc1fe727bd40\" returns successfully" Sep 13 00:03:41.431779 systemd[1]: Started sshd@9-10.200.20.14:22-10.200.16.10:53200.service - OpenSSH per-connection server daemon (10.200.16.10:53200). 
Sep 13 00:03:41.847816 sshd[7255]: Accepted publickey for core from 10.200.16.10 port 53200 ssh2: RSA SHA256:blsec9WXBGJMQIu7Y4EANO2ooyLDRMDELziWNnUsds8 Sep 13 00:03:41.849675 sshd[7255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:03:41.854216 systemd-logind[1688]: New session 12 of user core. Sep 13 00:03:41.862198 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 13 00:03:42.222906 sshd[7255]: pam_unix(sshd:session): session closed for user core Sep 13 00:03:42.226250 systemd[1]: sshd@9-10.200.20.14:22-10.200.16.10:53200.service: Deactivated successfully. Sep 13 00:03:42.228224 systemd[1]: session-12.scope: Deactivated successfully. Sep 13 00:03:42.229072 systemd-logind[1688]: Session 12 logged out. Waiting for processes to exit. Sep 13 00:03:42.230172 systemd-logind[1688]: Removed session 12. Sep 13 00:03:47.304117 systemd[1]: Started sshd@10-10.200.20.14:22-10.200.16.10:53204.service - OpenSSH per-connection server daemon (10.200.16.10:53204). Sep 13 00:03:47.723466 sshd[7271]: Accepted publickey for core from 10.200.16.10 port 53204 ssh2: RSA SHA256:blsec9WXBGJMQIu7Y4EANO2ooyLDRMDELziWNnUsds8 Sep 13 00:03:47.724821 sshd[7271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:03:47.729496 systemd-logind[1688]: New session 13 of user core. Sep 13 00:03:47.735186 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 13 00:03:48.098429 sshd[7271]: pam_unix(sshd:session): session closed for user core Sep 13 00:03:48.104518 systemd-logind[1688]: Session 13 logged out. Waiting for processes to exit. Sep 13 00:03:48.104936 systemd[1]: sshd@10-10.200.20.14:22-10.200.16.10:53204.service: Deactivated successfully. Sep 13 00:03:48.108974 systemd[1]: session-13.scope: Deactivated successfully. Sep 13 00:03:48.113154 systemd-logind[1688]: Removed session 13. Sep 13 00:03:53.175190 systemd[1]: Started sshd@11-10.200.20.14:22-10.200.16.10:51808.service - OpenSSH per-connection server daemon (10.200.16.10:51808). Sep 13 00:03:53.592368 sshd[7291]: Accepted publickey for core from 10.200.16.10 port 51808 ssh2: RSA SHA256:blsec9WXBGJMQIu7Y4EANO2ooyLDRMDELziWNnUsds8 Sep 13 00:03:53.593816 sshd[7291]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:03:53.598527 systemd-logind[1688]: New session 14 of user core. Sep 13 00:03:53.608190 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 13 00:03:53.990780 sshd[7291]: pam_unix(sshd:session): session closed for user core Sep 13 00:03:53.994218 systemd[1]: sshd@11-10.200.20.14:22-10.200.16.10:51808.service: Deactivated successfully. Sep 13 00:03:53.996948 systemd[1]: session-14.scope: Deactivated successfully. Sep 13 00:03:53.999661 systemd-logind[1688]: Session 14 logged out. Waiting for processes to exit. Sep 13 00:03:54.001675 systemd-logind[1688]: Removed session 14. Sep 13 00:03:54.068299 systemd[1]: Started sshd@12-10.200.20.14:22-10.200.16.10:51820.service - OpenSSH per-connection server daemon (10.200.16.10:51820). Sep 13 00:03:54.477650 sshd[7305]: Accepted publickey for core from 10.200.16.10 port 51820 ssh2: RSA SHA256:blsec9WXBGJMQIu7Y4EANO2ooyLDRMDELziWNnUsds8 Sep 13 00:03:54.479150 sshd[7305]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:03:54.482984 systemd-logind[1688]: New session 15 of user core. Sep 13 00:03:54.488271 systemd[1]: Started session-15.scope - Session 15 of User core. 
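From here to the end of the log the same five-step SSH pattern repeats for sessions 12 through 30: per-connection socket unit started, publickey accepted, pam_unix opens the session, logind registers it, and the mirror image on close. Pairing logind's New/Removed entries gives per-session durations; a sketch assuming this log's two-digit day field and a single known year (the session_durations helper is illustrative):

    import re
    from datetime import datetime

    NEW  = re.compile(r'(\w{3} \d{2} \d{2}:\d{2}:\d{2}\.\d+) systemd-logind\[\d+\]: '
                      r'New session (\d+) of user \w+\.')
    GONE = re.compile(r'(\w{3} \d{2} \d{2}:\d{2}:\d{2}\.\d+) systemd-logind\[\d+\]: '
                      r'Removed session (\d+)\.')

    def session_durations(journal_text, year=2025):
        """Seconds between logind's New and Removed entries, per session ID."""
        def ts(stamp):
            return datetime.strptime(f"{year} {stamp}", "%Y %b %d %H:%M:%S.%f")
        opened = {sid: ts(t) for t, sid in NEW.findall(journal_text)}
        return {sid: (ts(t) - opened[sid]).total_seconds()
                for t, sid in GONE.findall(journal_text) if sid in opened}

Most of these sessions are short: session 12, for instance, opens at 00:03:41.854 and is removed at 00:03:42.230, well under a second.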
Sep 13 00:03:54.894339 sshd[7305]: pam_unix(sshd:session): session closed for user core Sep 13 00:03:54.898245 systemd-logind[1688]: Session 15 logged out. Waiting for processes to exit. Sep 13 00:03:54.898278 systemd[1]: sshd@12-10.200.20.14:22-10.200.16.10:51820.service: Deactivated successfully. Sep 13 00:03:54.900486 systemd[1]: session-15.scope: Deactivated successfully. Sep 13 00:03:54.902129 systemd-logind[1688]: Removed session 15. Sep 13 00:03:54.972268 systemd[1]: Started sshd@13-10.200.20.14:22-10.200.16.10:51836.service - OpenSSH per-connection server daemon (10.200.16.10:51836). Sep 13 00:03:55.377883 sshd[7316]: Accepted publickey for core from 10.200.16.10 port 51836 ssh2: RSA SHA256:blsec9WXBGJMQIu7Y4EANO2ooyLDRMDELziWNnUsds8 Sep 13 00:03:55.379240 sshd[7316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:03:55.383239 systemd-logind[1688]: New session 16 of user core. Sep 13 00:03:55.387223 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 13 00:03:55.784310 sshd[7316]: pam_unix(sshd:session): session closed for user core Sep 13 00:03:55.788493 systemd[1]: sshd@13-10.200.20.14:22-10.200.16.10:51836.service: Deactivated successfully. Sep 13 00:03:55.788494 systemd-logind[1688]: Session 16 logged out. Waiting for processes to exit. Sep 13 00:03:55.791935 systemd[1]: session-16.scope: Deactivated successfully. Sep 13 00:03:55.792885 systemd-logind[1688]: Removed session 16. Sep 13 00:04:00.869670 systemd[1]: Started sshd@14-10.200.20.14:22-10.200.16.10:45996.service - OpenSSH per-connection server daemon (10.200.16.10:45996). Sep 13 00:04:01.278274 sshd[7351]: Accepted publickey for core from 10.200.16.10 port 45996 ssh2: RSA SHA256:blsec9WXBGJMQIu7Y4EANO2ooyLDRMDELziWNnUsds8 Sep 13 00:04:01.279684 sshd[7351]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:04:01.284102 systemd-logind[1688]: New session 17 of user core. Sep 13 00:04:01.290188 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 13 00:04:01.645752 sshd[7351]: pam_unix(sshd:session): session closed for user core Sep 13 00:04:01.649607 systemd[1]: sshd@14-10.200.20.14:22-10.200.16.10:45996.service: Deactivated successfully. Sep 13 00:04:01.651823 systemd[1]: session-17.scope: Deactivated successfully. Sep 13 00:04:01.652785 systemd-logind[1688]: Session 17 logged out. Waiting for processes to exit. Sep 13 00:04:01.653868 systemd-logind[1688]: Removed session 17. Sep 13 00:04:06.730085 systemd[1]: Started sshd@15-10.200.20.14:22-10.200.16.10:46012.service - OpenSSH per-connection server daemon (10.200.16.10:46012). Sep 13 00:04:07.142126 sshd[7407]: Accepted publickey for core from 10.200.16.10 port 46012 ssh2: RSA SHA256:blsec9WXBGJMQIu7Y4EANO2ooyLDRMDELziWNnUsds8 Sep 13 00:04:07.144543 sshd[7407]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:04:07.148894 systemd-logind[1688]: New session 18 of user core. Sep 13 00:04:07.155403 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 13 00:04:07.526097 sshd[7407]: pam_unix(sshd:session): session closed for user core Sep 13 00:04:07.530870 systemd[1]: sshd@15-10.200.20.14:22-10.200.16.10:46012.service: Deactivated successfully. Sep 13 00:04:07.535419 systemd[1]: session-18.scope: Deactivated successfully. Sep 13 00:04:07.539711 systemd-logind[1688]: Session 18 logged out. Waiting for processes to exit. Sep 13 00:04:07.541793 systemd-logind[1688]: Removed session 18. 
Sep 13 00:04:12.604503 systemd[1]: Started sshd@16-10.200.20.14:22-10.200.16.10:47102.service - OpenSSH per-connection server daemon (10.200.16.10:47102). Sep 13 00:04:13.013752 sshd[7461]: Accepted publickey for core from 10.200.16.10 port 47102 ssh2: RSA SHA256:blsec9WXBGJMQIu7Y4EANO2ooyLDRMDELziWNnUsds8 Sep 13 00:04:13.014627 sshd[7461]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:04:13.019186 systemd-logind[1688]: New session 19 of user core. Sep 13 00:04:13.022186 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 13 00:04:13.391366 sshd[7461]: pam_unix(sshd:session): session closed for user core Sep 13 00:04:13.394576 systemd[1]: sshd@16-10.200.20.14:22-10.200.16.10:47102.service: Deactivated successfully. Sep 13 00:04:13.396775 systemd[1]: session-19.scope: Deactivated successfully. Sep 13 00:04:13.397754 systemd-logind[1688]: Session 19 logged out. Waiting for processes to exit. Sep 13 00:04:13.398881 systemd-logind[1688]: Removed session 19. Sep 13 00:04:18.477481 systemd[1]: Started sshd@17-10.200.20.14:22-10.200.16.10:47106.service - OpenSSH per-connection server daemon (10.200.16.10:47106). Sep 13 00:04:18.889106 sshd[7476]: Accepted publickey for core from 10.200.16.10 port 47106 ssh2: RSA SHA256:blsec9WXBGJMQIu7Y4EANO2ooyLDRMDELziWNnUsds8 Sep 13 00:04:18.891296 sshd[7476]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:04:18.897087 systemd-logind[1688]: New session 20 of user core. Sep 13 00:04:18.901221 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 13 00:04:19.291850 sshd[7476]: pam_unix(sshd:session): session closed for user core Sep 13 00:04:19.296072 systemd-logind[1688]: Session 20 logged out. Waiting for processes to exit. Sep 13 00:04:19.296881 systemd[1]: sshd@17-10.200.20.14:22-10.200.16.10:47106.service: Deactivated successfully. Sep 13 00:04:19.301553 systemd[1]: session-20.scope: Deactivated successfully. Sep 13 00:04:19.302768 systemd-logind[1688]: Removed session 20. Sep 13 00:04:19.383039 systemd[1]: Started sshd@18-10.200.20.14:22-10.200.16.10:47116.service - OpenSSH per-connection server daemon (10.200.16.10:47116). Sep 13 00:04:19.801057 sshd[7491]: Accepted publickey for core from 10.200.16.10 port 47116 ssh2: RSA SHA256:blsec9WXBGJMQIu7Y4EANO2ooyLDRMDELziWNnUsds8 Sep 13 00:04:19.802912 sshd[7491]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:04:19.808330 systemd-logind[1688]: New session 21 of user core. Sep 13 00:04:19.814173 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 13 00:04:20.377200 sshd[7491]: pam_unix(sshd:session): session closed for user core Sep 13 00:04:20.382011 systemd[1]: sshd@18-10.200.20.14:22-10.200.16.10:47116.service: Deactivated successfully. Sep 13 00:04:20.382202 systemd-logind[1688]: Session 21 logged out. Waiting for processes to exit. Sep 13 00:04:20.384904 systemd[1]: session-21.scope: Deactivated successfully. Sep 13 00:04:20.386410 systemd-logind[1688]: Removed session 21. Sep 13 00:04:20.463355 systemd[1]: Started sshd@19-10.200.20.14:22-10.200.16.10:39844.service - OpenSSH per-connection server daemon (10.200.16.10:39844). 
Sep 13 00:04:20.877383 sshd[7502]: Accepted publickey for core from 10.200.16.10 port 39844 ssh2: RSA SHA256:blsec9WXBGJMQIu7Y4EANO2ooyLDRMDELziWNnUsds8 Sep 13 00:04:20.878792 sshd[7502]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:04:20.885299 systemd-logind[1688]: New session 22 of user core. Sep 13 00:04:20.890158 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 13 00:04:23.041528 sshd[7502]: pam_unix(sshd:session): session closed for user core Sep 13 00:04:23.044574 systemd[1]: sshd@19-10.200.20.14:22-10.200.16.10:39844.service: Deactivated successfully. Sep 13 00:04:23.047091 systemd[1]: session-22.scope: Deactivated successfully. Sep 13 00:04:23.052380 systemd-logind[1688]: Session 22 logged out. Waiting for processes to exit. Sep 13 00:04:23.053316 systemd-logind[1688]: Removed session 22. Sep 13 00:04:23.125481 systemd[1]: Started sshd@20-10.200.20.14:22-10.200.16.10:39850.service - OpenSSH per-connection server daemon (10.200.16.10:39850). Sep 13 00:04:23.547914 sshd[7526]: Accepted publickey for core from 10.200.16.10 port 39850 ssh2: RSA SHA256:blsec9WXBGJMQIu7Y4EANO2ooyLDRMDELziWNnUsds8 Sep 13 00:04:23.548944 sshd[7526]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:04:23.555842 systemd-logind[1688]: New session 23 of user core. Sep 13 00:04:23.561232 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 13 00:04:24.028945 sshd[7526]: pam_unix(sshd:session): session closed for user core Sep 13 00:04:24.032233 systemd-logind[1688]: Session 23 logged out. Waiting for processes to exit. Sep 13 00:04:24.033223 systemd[1]: sshd@20-10.200.20.14:22-10.200.16.10:39850.service: Deactivated successfully. Sep 13 00:04:24.035310 systemd[1]: session-23.scope: Deactivated successfully. Sep 13 00:04:24.036802 systemd-logind[1688]: Removed session 23. Sep 13 00:04:24.109624 systemd[1]: Started sshd@21-10.200.20.14:22-10.200.16.10:39858.service - OpenSSH per-connection server daemon (10.200.16.10:39858). Sep 13 00:04:24.517098 sshd[7537]: Accepted publickey for core from 10.200.16.10 port 39858 ssh2: RSA SHA256:blsec9WXBGJMQIu7Y4EANO2ooyLDRMDELziWNnUsds8 Sep 13 00:04:24.518417 sshd[7537]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:04:24.522595 systemd-logind[1688]: New session 24 of user core. Sep 13 00:04:24.526171 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 13 00:04:24.885744 sshd[7537]: pam_unix(sshd:session): session closed for user core Sep 13 00:04:24.889473 systemd[1]: sshd@21-10.200.20.14:22-10.200.16.10:39858.service: Deactivated successfully. Sep 13 00:04:24.891105 systemd[1]: session-24.scope: Deactivated successfully. Sep 13 00:04:24.891718 systemd-logind[1688]: Session 24 logged out. Waiting for processes to exit. Sep 13 00:04:24.892916 systemd-logind[1688]: Removed session 24. Sep 13 00:04:29.961771 systemd[1]: Started sshd@22-10.200.20.14:22-10.200.16.10:49090.service - OpenSSH per-connection server daemon (10.200.16.10:49090). Sep 13 00:04:30.374746 sshd[7574]: Accepted publickey for core from 10.200.16.10 port 49090 ssh2: RSA SHA256:blsec9WXBGJMQIu7Y4EANO2ooyLDRMDELziWNnUsds8 Sep 13 00:04:30.375945 sshd[7574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:04:30.379832 systemd-logind[1688]: New session 25 of user core. Sep 13 00:04:30.387190 systemd[1]: Started session-25.scope - Session 25 of User core. 
Sep 13 00:04:30.740410 sshd[7574]: pam_unix(sshd:session): session closed for user core Sep 13 00:04:30.743918 systemd[1]: sshd@22-10.200.20.14:22-10.200.16.10:49090.service: Deactivated successfully. Sep 13 00:04:30.746054 systemd[1]: session-25.scope: Deactivated successfully. Sep 13 00:04:30.746931 systemd-logind[1688]: Session 25 logged out. Waiting for processes to exit. Sep 13 00:04:30.748067 systemd-logind[1688]: Removed session 25. Sep 13 00:04:35.824122 systemd[1]: Started sshd@23-10.200.20.14:22-10.200.16.10:49106.service - OpenSSH per-connection server daemon (10.200.16.10:49106). Sep 13 00:04:36.247984 sshd[7605]: Accepted publickey for core from 10.200.16.10 port 49106 ssh2: RSA SHA256:blsec9WXBGJMQIu7Y4EANO2ooyLDRMDELziWNnUsds8 Sep 13 00:04:36.249936 sshd[7605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:04:36.255089 systemd-logind[1688]: New session 26 of user core. Sep 13 00:04:36.258188 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 13 00:04:36.624282 sshd[7605]: pam_unix(sshd:session): session closed for user core Sep 13 00:04:36.629312 systemd[1]: sshd@23-10.200.20.14:22-10.200.16.10:49106.service: Deactivated successfully. Sep 13 00:04:36.633546 systemd[1]: session-26.scope: Deactivated successfully. Sep 13 00:04:36.634986 systemd-logind[1688]: Session 26 logged out. Waiting for processes to exit. Sep 13 00:04:36.635935 systemd-logind[1688]: Removed session 26. Sep 13 00:04:41.700636 systemd[1]: Started sshd@24-10.200.20.14:22-10.200.16.10:41362.service - OpenSSH per-connection server daemon (10.200.16.10:41362). Sep 13 00:04:42.118807 sshd[7660]: Accepted publickey for core from 10.200.16.10 port 41362 ssh2: RSA SHA256:blsec9WXBGJMQIu7Y4EANO2ooyLDRMDELziWNnUsds8 Sep 13 00:04:42.120261 sshd[7660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:04:42.125036 systemd-logind[1688]: New session 27 of user core. Sep 13 00:04:42.130188 systemd[1]: Started session-27.scope - Session 27 of User core. Sep 13 00:04:42.483995 sshd[7660]: pam_unix(sshd:session): session closed for user core Sep 13 00:04:42.486959 systemd-logind[1688]: Session 27 logged out. Waiting for processes to exit. Sep 13 00:04:42.486962 systemd[1]: session-27.scope: Deactivated successfully. Sep 13 00:04:42.488208 systemd[1]: sshd@24-10.200.20.14:22-10.200.16.10:41362.service: Deactivated successfully. Sep 13 00:04:42.490888 systemd-logind[1688]: Removed session 27. Sep 13 00:04:47.561758 systemd[1]: Started sshd@25-10.200.20.14:22-10.200.16.10:41364.service - OpenSSH per-connection server daemon (10.200.16.10:41364). Sep 13 00:04:47.979412 sshd[7675]: Accepted publickey for core from 10.200.16.10 port 41364 ssh2: RSA SHA256:blsec9WXBGJMQIu7Y4EANO2ooyLDRMDELziWNnUsds8 Sep 13 00:04:47.980770 sshd[7675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:04:47.985182 systemd-logind[1688]: New session 28 of user core. Sep 13 00:04:47.993176 systemd[1]: Started session-28.scope - Session 28 of User core. Sep 13 00:04:48.356292 sshd[7675]: pam_unix(sshd:session): session closed for user core Sep 13 00:04:48.360291 systemd-logind[1688]: Session 28 logged out. Waiting for processes to exit. Sep 13 00:04:48.360548 systemd[1]: sshd@25-10.200.20.14:22-10.200.16.10:41364.service: Deactivated successfully. Sep 13 00:04:48.362764 systemd[1]: session-28.scope: Deactivated successfully. Sep 13 00:04:48.363583 systemd-logind[1688]: Removed session 28. 
Sep 13 00:04:53.435269 systemd[1]: Started sshd@26-10.200.20.14:22-10.200.16.10:33322.service - OpenSSH per-connection server daemon (10.200.16.10:33322). Sep 13 00:04:53.842606 sshd[7688]: Accepted publickey for core from 10.200.16.10 port 33322 ssh2: RSA SHA256:blsec9WXBGJMQIu7Y4EANO2ooyLDRMDELziWNnUsds8 Sep 13 00:04:53.843974 sshd[7688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:04:53.847997 systemd-logind[1688]: New session 29 of user core. Sep 13 00:04:53.852183 systemd[1]: Started session-29.scope - Session 29 of User core. Sep 13 00:04:54.213528 sshd[7688]: pam_unix(sshd:session): session closed for user core Sep 13 00:04:54.216863 systemd-logind[1688]: Session 29 logged out. Waiting for processes to exit. Sep 13 00:04:54.217854 systemd[1]: sshd@26-10.200.20.14:22-10.200.16.10:33322.service: Deactivated successfully. Sep 13 00:04:54.220614 systemd[1]: session-29.scope: Deactivated successfully. Sep 13 00:04:54.221870 systemd-logind[1688]: Removed session 29. Sep 13 00:04:55.573113 systemd[1]: run-containerd-runc-k8s.io-4855551216a01897c8de6708bc5f29b16724d86a13ef32f2901ce636d3f68141-runc.Xoa9pW.mount: Deactivated successfully. Sep 13 00:04:59.301303 systemd[1]: Started sshd@27-10.200.20.14:22-10.200.16.10:33338.service - OpenSSH per-connection server daemon (10.200.16.10:33338). Sep 13 00:04:59.710955 sshd[7724]: Accepted publickey for core from 10.200.16.10 port 33338 ssh2: RSA SHA256:blsec9WXBGJMQIu7Y4EANO2ooyLDRMDELziWNnUsds8 Sep 13 00:04:59.712431 sshd[7724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:04:59.717879 systemd-logind[1688]: New session 30 of user core. Sep 13 00:04:59.720184 systemd[1]: Started session-30.scope - Session 30 of User core. Sep 13 00:05:00.089512 sshd[7724]: pam_unix(sshd:session): session closed for user core Sep 13 00:05:00.093309 systemd[1]: sshd@27-10.200.20.14:22-10.200.16.10:33338.service: Deactivated successfully. Sep 13 00:05:00.095139 systemd[1]: session-30.scope: Deactivated successfully. Sep 13 00:05:00.097539 systemd-logind[1688]: Session 30 logged out. Waiting for processes to exit. Sep 13 00:05:00.099005 systemd-logind[1688]: Removed session 30.