Sep 12 17:43:44.312215 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 12 17:43:44.312236 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Sep 12 15:59:19 -00 2025
Sep 12 17:43:44.312244 kernel: KASLR enabled
Sep 12 17:43:44.312249 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Sep 12 17:43:44.312257 kernel: printk: bootconsole [pl11] enabled
Sep 12 17:43:44.312262 kernel: efi: EFI v2.7 by EDK II
Sep 12 17:43:44.312269 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f214018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18
Sep 12 17:43:44.312275 kernel: random: crng init done
Sep 12 17:43:44.312281 kernel: ACPI: Early table checksum verification disabled
Sep 12 17:43:44.312287 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Sep 12 17:43:44.312293 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 12 17:43:44.312299 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 12 17:43:44.312306 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Sep 12 17:43:44.312312 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 12 17:43:44.312320 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 12 17:43:44.312326 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 12 17:43:44.312333 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 12 17:43:44.312340 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 12 17:43:44.312347 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 12 17:43:44.312353 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Sep 12 17:43:44.312359 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 12 17:43:44.312366 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Sep 12 17:43:44.312372 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Sep 12 17:43:44.312378 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Sep 12 17:43:44.312384 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Sep 12 17:43:44.312391 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Sep 12 17:43:44.312397 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Sep 12 17:43:44.312403 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Sep 12 17:43:44.312411 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Sep 12 17:43:44.312417 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Sep 12 17:43:44.312423 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Sep 12 17:43:44.312430 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Sep 12 17:43:44.312436 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Sep 12 17:43:44.312442 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Sep 12 17:43:44.312448 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff]
Sep 12 17:43:44.312454 kernel: Zone ranges:
Sep 12 17:43:44.312460 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Sep 12 17:43:44.312467 kernel: DMA32 empty
Sep 12 17:43:44.312473 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Sep 12 17:43:44.312479 kernel: Movable zone start for each node
Sep 12 17:43:44.312493 kernel: Early memory node ranges
Sep 12 17:43:44.312501 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Sep 12 17:43:44.312509 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff]
Sep 12 17:43:44.312516 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Sep 12 17:43:44.312525 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Sep 12 17:43:44.312534 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Sep 12 17:43:44.312543 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Sep 12 17:43:44.312551 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Sep 12 17:43:44.312559 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Sep 12 17:43:44.312567 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Sep 12 17:43:44.312574 kernel: psci: probing for conduit method from ACPI.
Sep 12 17:43:44.312596 kernel: psci: PSCIv1.1 detected in firmware.
Sep 12 17:43:44.312605 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 12 17:43:44.312613 kernel: psci: MIGRATE_INFO_TYPE not supported.
Sep 12 17:43:44.312622 kernel: psci: SMC Calling Convention v1.4
Sep 12 17:43:44.312631 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Sep 12 17:43:44.312639 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Sep 12 17:43:44.312648 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Sep 12 17:43:44.312655 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Sep 12 17:43:44.312662 kernel: pcpu-alloc: [0] 0 [0] 1
Sep 12 17:43:44.312669 kernel: Detected PIPT I-cache on CPU0
Sep 12 17:43:44.312677 kernel: CPU features: detected: GIC system register CPU interface
Sep 12 17:43:44.312685 kernel: CPU features: detected: Hardware dirty bit management
Sep 12 17:43:44.312694 kernel: CPU features: detected: Spectre-BHB
Sep 12 17:43:44.312702 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 12 17:43:44.312711 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 12 17:43:44.312719 kernel: CPU features: detected: ARM erratum 1418040
Sep 12 17:43:44.312726 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Sep 12 17:43:44.314777 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 12 17:43:44.314794 kernel: alternatives: applying boot alternatives
Sep 12 17:43:44.314804 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=1e63d3057914877efa0eb5f75703bd3a3d4c120bdf4a7ab97f41083e29183e56
Sep 12 17:43:44.314812 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 12 17:43:44.314819 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 12 17:43:44.314826 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 12 17:43:44.314833 kernel: Fallback order for Node 0: 0
Sep 12 17:43:44.314840 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Sep 12 17:43:44.314847 kernel: Policy zone: Normal
Sep 12 17:43:44.314854 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 12 17:43:44.314860 kernel: software IO TLB: area num 2.
Sep 12 17:43:44.314872 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB)
Sep 12 17:43:44.314879 kernel: Memory: 3982564K/4194160K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39488K init, 897K bss, 211596K reserved, 0K cma-reserved)
Sep 12 17:43:44.314886 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 12 17:43:44.314893 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 12 17:43:44.314901 kernel: rcu: RCU event tracing is enabled.
Sep 12 17:43:44.314908 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 12 17:43:44.314915 kernel: Trampoline variant of Tasks RCU enabled.
Sep 12 17:43:44.314921 kernel: Tracing variant of Tasks RCU enabled.
Sep 12 17:43:44.314928 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 12 17:43:44.314935 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 12 17:43:44.314941 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 12 17:43:44.314950 kernel: GICv3: 960 SPIs implemented
Sep 12 17:43:44.314957 kernel: GICv3: 0 Extended SPIs implemented
Sep 12 17:43:44.314963 kernel: Root IRQ handler: gic_handle_irq
Sep 12 17:43:44.314970 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Sep 12 17:43:44.314977 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Sep 12 17:43:44.314983 kernel: ITS: No ITS available, not enabling LPIs
Sep 12 17:43:44.314990 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 12 17:43:44.314997 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 12 17:43:44.315004 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 12 17:43:44.315011 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 12 17:43:44.315018 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 12 17:43:44.315026 kernel: Console: colour dummy device 80x25
Sep 12 17:43:44.315033 kernel: printk: console [tty1] enabled
Sep 12 17:43:44.315040 kernel: ACPI: Core revision 20230628
Sep 12 17:43:44.315048 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 12 17:43:44.315055 kernel: pid_max: default: 32768 minimum: 301
Sep 12 17:43:44.315062 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 12 17:43:44.315068 kernel: landlock: Up and running.
Sep 12 17:43:44.315075 kernel: SELinux: Initializing.
Sep 12 17:43:44.315083 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 12 17:43:44.315090 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 12 17:43:44.315098 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 12 17:43:44.315105 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 12 17:43:44.315113 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Sep 12 17:43:44.315120 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0
Sep 12 17:43:44.315126 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Sep 12 17:43:44.315133 kernel: rcu: Hierarchical SRCU implementation.
Sep 12 17:43:44.315141 kernel: rcu: Max phase no-delay instances is 400.
Sep 12 17:43:44.315154 kernel: Remapping and enabling EFI services.
Sep 12 17:43:44.315161 kernel: smp: Bringing up secondary CPUs ...
Sep 12 17:43:44.315168 kernel: Detected PIPT I-cache on CPU1
Sep 12 17:43:44.315176 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Sep 12 17:43:44.315184 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 12 17:43:44.315192 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 12 17:43:44.315199 kernel: smp: Brought up 1 node, 2 CPUs
Sep 12 17:43:44.315206 kernel: SMP: Total of 2 processors activated.
Sep 12 17:43:44.315214 kernel: CPU features: detected: 32-bit EL0 Support
Sep 12 17:43:44.315223 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Sep 12 17:43:44.315230 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 12 17:43:44.315238 kernel: CPU features: detected: CRC32 instructions
Sep 12 17:43:44.315245 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 12 17:43:44.315252 kernel: CPU features: detected: LSE atomic instructions
Sep 12 17:43:44.315259 kernel: CPU features: detected: Privileged Access Never
Sep 12 17:43:44.315266 kernel: CPU: All CPU(s) started at EL1
Sep 12 17:43:44.315274 kernel: alternatives: applying system-wide alternatives
Sep 12 17:43:44.315281 kernel: devtmpfs: initialized
Sep 12 17:43:44.315290 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 12 17:43:44.315297 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 12 17:43:44.315304 kernel: pinctrl core: initialized pinctrl subsystem
Sep 12 17:43:44.315312 kernel: SMBIOS 3.1.0 present.
Sep 12 17:43:44.315319 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Sep 12 17:43:44.315327 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 12 17:43:44.315334 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 12 17:43:44.315341 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 12 17:43:44.315349 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 12 17:43:44.315358 kernel: audit: initializing netlink subsys (disabled)
Sep 12 17:43:44.315365 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Sep 12 17:43:44.315372 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 12 17:43:44.315379 kernel: cpuidle: using governor menu
Sep 12 17:43:44.315387 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 12 17:43:44.315395 kernel: ASID allocator initialised with 32768 entries
Sep 12 17:43:44.315402 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 12 17:43:44.315409 kernel: Serial: AMBA PL011 UART driver
Sep 12 17:43:44.315416 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 12 17:43:44.315425 kernel: Modules: 0 pages in range for non-PLT usage
Sep 12 17:43:44.315432 kernel: Modules: 508992 pages in range for PLT usage
Sep 12 17:43:44.315440 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 12 17:43:44.315447 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 12 17:43:44.315455 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 12 17:43:44.315462 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 12 17:43:44.315469 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 12 17:43:44.315477 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 12 17:43:44.315484 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 12 17:43:44.315493 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 12 17:43:44.315500 kernel: ACPI: Added _OSI(Module Device)
Sep 12 17:43:44.315507 kernel: ACPI: Added _OSI(Processor Device)
Sep 12 17:43:44.315515 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 12 17:43:44.315522 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 12 17:43:44.315529 kernel: ACPI: Interpreter enabled
Sep 12 17:43:44.315536 kernel: ACPI: Using GIC for interrupt routing
Sep 12 17:43:44.315544 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Sep 12 17:43:44.315551 kernel: printk: console [ttyAMA0] enabled
Sep 12 17:43:44.315560 kernel: printk: bootconsole [pl11] disabled
Sep 12 17:43:44.315567 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Sep 12 17:43:44.315574 kernel: iommu: Default domain type: Translated
Sep 12 17:43:44.315581 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 12 17:43:44.315589 kernel: efivars: Registered efivars operations
Sep 12 17:43:44.315596 kernel: vgaarb: loaded
Sep 12 17:43:44.315603 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 12 17:43:44.315610 kernel: VFS: Disk quotas dquot_6.6.0
Sep 12 17:43:44.315618 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 12 17:43:44.315626 kernel: pnp: PnP ACPI init
Sep 12 17:43:44.315634 kernel: pnp: PnP ACPI: found 0 devices
Sep 12 17:43:44.315641 kernel: NET: Registered PF_INET protocol family
Sep 12 17:43:44.315649 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 12 17:43:44.315656 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 12 17:43:44.315663 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 12 17:43:44.315671 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 12 17:43:44.315678 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 12 17:43:44.315685 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 12 17:43:44.315694 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 12 17:43:44.315702 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 12 17:43:44.315709 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 12 17:43:44.315716 kernel: PCI: CLS 0 bytes, default 64
Sep 12 17:43:44.315723 kernel: kvm [1]: HYP mode not available
Sep 12 17:43:44.315731 kernel: Initialise system trusted keyrings
Sep 12 17:43:44.315749 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 12 17:43:44.315757 kernel: Key type asymmetric registered
Sep 12 17:43:44.315764 kernel: Asymmetric key parser 'x509' registered
Sep 12 17:43:44.315773 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 12 17:43:44.315780 kernel: io scheduler mq-deadline registered
Sep 12 17:43:44.315788 kernel: io scheduler kyber registered
Sep 12 17:43:44.315795 kernel: io scheduler bfq registered
Sep 12 17:43:44.315802 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 12 17:43:44.315809 kernel: thunder_xcv, ver 1.0
Sep 12 17:43:44.315816 kernel: thunder_bgx, ver 1.0
Sep 12 17:43:44.315824 kernel: nicpf, ver 1.0
Sep 12 17:43:44.315831 kernel: nicvf, ver 1.0
Sep 12 17:43:44.315982 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 12 17:43:44.316058 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-12T17:43:43 UTC (1757699023)
Sep 12 17:43:44.316068 kernel: efifb: probing for efifb
Sep 12 17:43:44.316076 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Sep 12 17:43:44.316083 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Sep 12 17:43:44.316091 kernel: efifb: scrolling: redraw
Sep 12 17:43:44.316098 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Sep 12 17:43:44.316105 kernel: Console: switching to colour frame buffer device 128x48
Sep 12 17:43:44.316114 kernel: fb0: EFI VGA frame buffer device
Sep 12 17:43:44.316122 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Sep 12 17:43:44.316129 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 12 17:43:44.316136 kernel: No ACPI PMU IRQ for CPU0
Sep 12 17:43:44.316143 kernel: No ACPI PMU IRQ for CPU1
Sep 12 17:43:44.316151 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Sep 12 17:43:44.316158 kernel: watchdog: Delayed init of the lockup detector failed: -19
Sep 12 17:43:44.316165 kernel: watchdog: Hard watchdog permanently disabled
Sep 12 17:43:44.316172 kernel: NET: Registered PF_INET6 protocol family
Sep 12 17:43:44.316181 kernel: Segment Routing with IPv6
Sep 12 17:43:44.316188 kernel: In-situ OAM (IOAM) with IPv6
Sep 12 17:43:44.316195 kernel: NET: Registered PF_PACKET protocol family
Sep 12 17:43:44.316202 kernel: Key type dns_resolver registered
Sep 12 17:43:44.316209 kernel: registered taskstats version 1
Sep 12 17:43:44.316217 kernel: Loading compiled-in X.509 certificates
Sep 12 17:43:44.316224 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 2d576b5e69e6c5de2f731966fe8b55173c144d02'
Sep 12 17:43:44.316231 kernel: Key type .fscrypt registered
Sep 12 17:43:44.316238 kernel: Key type fscrypt-provisioning registered
Sep 12 17:43:44.316247 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 12 17:43:44.316254 kernel: ima: Allocated hash algorithm: sha1
Sep 12 17:43:44.316262 kernel: ima: No architecture policies found
Sep 12 17:43:44.316269 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 12 17:43:44.316276 kernel: clk: Disabling unused clocks
Sep 12 17:43:44.316283 kernel: Freeing unused kernel memory: 39488K
Sep 12 17:43:44.316291 kernel: Run /init as init process
Sep 12 17:43:44.316298 kernel: with arguments:
Sep 12 17:43:44.316305 kernel: /init
Sep 12 17:43:44.316313 kernel: with environment:
Sep 12 17:43:44.316320 kernel: HOME=/
Sep 12 17:43:44.316327 kernel: TERM=linux
Sep 12 17:43:44.316335 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 12 17:43:44.316344 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 12 17:43:44.316353 systemd[1]: Detected virtualization microsoft.
Sep 12 17:43:44.316361 systemd[1]: Detected architecture arm64.
Sep 12 17:43:44.316369 systemd[1]: Running in initrd.
Sep 12 17:43:44.316378 systemd[1]: No hostname configured, using default hostname.
Sep 12 17:43:44.316386 systemd[1]: Hostname set to .
Sep 12 17:43:44.316394 systemd[1]: Initializing machine ID from random generator.
Sep 12 17:43:44.316401 systemd[1]: Queued start job for default target initrd.target.
Sep 12 17:43:44.316410 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 17:43:44.316418 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 17:43:44.316426 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 12 17:43:44.316434 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 17:43:44.316444 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 12 17:43:44.316453 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 12 17:43:44.316462 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 12 17:43:44.316470 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 12 17:43:44.316478 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 17:43:44.316486 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 17:43:44.316504 systemd[1]: Reached target paths.target - Path Units.
Sep 12 17:43:44.316513 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 17:43:44.316521 systemd[1]: Reached target swap.target - Swaps.
Sep 12 17:43:44.316528 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 17:43:44.316536 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 17:43:44.316544 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 17:43:44.316552 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 12 17:43:44.316560 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 12 17:43:44.316568 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 17:43:44.316577 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 17:43:44.316585 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 17:43:44.316594 systemd[1]: Reached target sockets.target - Socket Units.
Sep 12 17:43:44.316602 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 12 17:43:44.316610 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 17:43:44.316618 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 12 17:43:44.316626 systemd[1]: Starting systemd-fsck-usr.service...
Sep 12 17:43:44.316634 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 17:43:44.316642 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 17:43:44.316670 systemd-journald[217]: Collecting audit messages is disabled.
Sep 12 17:43:44.316689 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:43:44.316698 systemd-journald[217]: Journal started
Sep 12 17:43:44.316718 systemd-journald[217]: Runtime Journal (/run/log/journal/05789322bf8b4baf8f40a8e5dedf8d74) is 8.0M, max 78.5M, 70.5M free.
Sep 12 17:43:44.328774 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 17:43:44.327851 systemd-modules-load[218]: Inserted module 'overlay'
Sep 12 17:43:44.360759 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 12 17:43:44.360119 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 12 17:43:44.375017 kernel: Bridge firewalling registered
Sep 12 17:43:44.369024 systemd-modules-load[218]: Inserted module 'br_netfilter'
Sep 12 17:43:44.370362 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 17:43:44.382317 systemd[1]: Finished systemd-fsck-usr.service.
Sep 12 17:43:44.391148 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 17:43:44.403560 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:43:44.425984 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 17:43:44.440028 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 17:43:44.450897 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 12 17:43:44.478928 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 12 17:43:44.486365 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:43:44.500645 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 17:43:44.507146 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 17:43:44.522943 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 17:43:44.551951 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 12 17:43:44.564050 dracut-cmdline[249]: dracut-dracut-053
Sep 12 17:43:44.564050 dracut-cmdline[249]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=1e63d3057914877efa0eb5f75703bd3a3d4c120bdf4a7ab97f41083e29183e56
Sep 12 17:43:44.568591 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 12 17:43:44.616956 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 17:43:44.629930 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 17:43:44.662176 systemd-resolved[260]: Positive Trust Anchors:
Sep 12 17:43:44.662193 systemd-resolved[260]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 12 17:43:44.662226 systemd-resolved[260]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 12 17:43:44.664432 systemd-resolved[260]: Defaulting to hostname 'linux'.
Sep 12 17:43:44.666922 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 12 17:43:44.673903 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 12 17:43:44.769768 kernel: SCSI subsystem initialized
Sep 12 17:43:44.777766 kernel: Loading iSCSI transport class v2.0-870.
Sep 12 17:43:44.787759 kernel: iscsi: registered transport (tcp)
Sep 12 17:43:44.805761 kernel: iscsi: registered transport (qla4xxx)
Sep 12 17:43:44.805830 kernel: QLogic iSCSI HBA Driver
Sep 12 17:43:44.844208 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 12 17:43:44.862866 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 12 17:43:44.893100 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 12 17:43:44.893139 kernel: device-mapper: uevent: version 1.0.3
Sep 12 17:43:44.899769 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 12 17:43:44.947765 kernel: raid6: neonx8 gen() 15745 MB/s
Sep 12 17:43:44.967747 kernel: raid6: neonx4 gen() 15670 MB/s
Sep 12 17:43:44.987744 kernel: raid6: neonx2 gen() 13239 MB/s
Sep 12 17:43:45.008746 kernel: raid6: neonx1 gen() 10523 MB/s
Sep 12 17:43:45.028744 kernel: raid6: int64x8 gen() 6960 MB/s
Sep 12 17:43:45.048748 kernel: raid6: int64x4 gen() 7353 MB/s
Sep 12 17:43:45.069745 kernel: raid6: int64x2 gen() 6133 MB/s
Sep 12 17:43:45.093083 kernel: raid6: int64x1 gen() 5061 MB/s
Sep 12 17:43:45.093103 kernel: raid6: using algorithm neonx8 gen() 15745 MB/s
Sep 12 17:43:45.117179 kernel: raid6: .... xor() 12062 MB/s, rmw enabled
Sep 12 17:43:45.117220 kernel: raid6: using neon recovery algorithm
Sep 12 17:43:45.129421 kernel: xor: measuring software checksum speed
Sep 12 17:43:45.129437 kernel: 8regs : 19778 MB/sec
Sep 12 17:43:45.137857 kernel: 32regs : 18760 MB/sec
Sep 12 17:43:45.137869 kernel: arm64_neon : 26280 MB/sec
Sep 12 17:43:45.142156 kernel: xor: using function: arm64_neon (26280 MB/sec)
Sep 12 17:43:45.193768 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 12 17:43:45.203056 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 17:43:45.220929 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 17:43:45.243578 systemd-udevd[436]: Using default interface naming scheme 'v255'.
Sep 12 17:43:45.249159 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 17:43:45.275988 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 12 17:43:45.287909 dracut-pre-trigger[440]: rd.md=0: removing MD RAID activation
Sep 12 17:43:45.314981 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 17:43:45.330295 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 17:43:45.370327 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 17:43:45.387952 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 12 17:43:45.415933 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 12 17:43:45.426479 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 17:43:45.445315 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 17:43:45.466218 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 17:43:45.499757 kernel: hv_vmbus: Vmbus version:5.3
Sep 12 17:43:45.500980 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 12 17:43:45.523523 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 12 17:43:45.540666 kernel: hv_vmbus: registering driver hyperv_keyboard
Sep 12 17:43:45.540689 kernel: hv_vmbus: registering driver hid_hyperv
Sep 12 17:43:45.540703 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Sep 12 17:43:45.529949 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:43:45.567879 kernel: hv_vmbus: registering driver hv_netvsc
Sep 12 17:43:45.567900 kernel: pps_core: LinuxPPS API ver. 1 registered
Sep 12 17:43:45.567609 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 17:43:45.588248 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Sep 12 17:43:45.588267 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Sep 12 17:43:45.595085 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 17:43:45.607143 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Sep 12 17:43:45.600625 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:43:45.628244 kernel: hv_vmbus: registering driver hv_storvsc
Sep 12 17:43:45.621674 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:43:45.642802 kernel: scsi host0: storvsc_host_t
Sep 12 17:43:45.642964 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Sep 12 17:43:45.652635 kernel: scsi host1: storvsc_host_t
Sep 12 17:43:45.654478 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:43:45.676829 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Sep 12 17:43:45.664712 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 17:43:45.692403 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 17:43:45.692503 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:43:45.719020 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:43:45.747907 kernel: PTP clock support registered
Sep 12 17:43:45.747938 kernel: hv_utils: Registering HyperV Utility Driver
Sep 12 17:43:45.747948 kernel: hv_vmbus: registering driver hv_utils
Sep 12 17:43:45.740975 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:43:46.177599 kernel: hv_netvsc 00224879-249a-0022-4879-249a00224879 eth0: VF slot 1 added
Sep 12 17:43:46.177746 kernel: hv_utils: Heartbeat IC version 3.0
Sep 12 17:43:46.177758 kernel: hv_utils: Shutdown IC version 3.2
Sep 12 17:43:46.177768 kernel: hv_utils: TimeSync IC version 4.0
Sep 12 17:43:46.177776 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Sep 12 17:43:46.165157 systemd-resolved[260]: Clock change detected. Flushing caches.
Sep 12 17:43:46.178099 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 17:43:46.210095 kernel: hv_vmbus: registering driver hv_pci
Sep 12 17:43:46.210119 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 12 17:43:46.210129 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Sep 12 17:43:46.225257 kernel: hv_pci 14b82d14-8dab-4122-bf57-d3f9659b8c85: PCI VMBus probing: Using version 0x10004
Sep 12 17:43:46.243381 kernel: hv_pci 14b82d14-8dab-4122-bf57-d3f9659b8c85: PCI host bridge to bus 8dab:00
Sep 12 17:43:46.243558 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Sep 12 17:43:46.243670 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Sep 12 17:43:46.243762 kernel: pci_bus 8dab:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Sep 12 17:43:46.233010 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:43:46.298313 kernel: sd 0:0:0:0: [sda] Write Protect is off
Sep 12 17:43:46.298476 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Sep 12 17:43:46.298578 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Sep 12 17:43:46.298670 kernel: pci_bus 8dab:00: No busn resource found for root bus, will use [bus 00-ff]
Sep 12 17:43:46.298771 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 12 17:43:46.298785 kernel: pci 8dab:00:02.0: [15b3:1018] type 00 class 0x020000
Sep 12 17:43:46.298807 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Sep 12 17:43:46.298900 kernel: pci 8dab:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Sep 12 17:43:46.309294 kernel: pci 8dab:00:02.0: enabling Extended Tags
Sep 12 17:43:46.329249 kernel: pci 8dab:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 8dab:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Sep 12 17:43:46.340634 kernel: pci_bus 8dab:00: busn_res: [bus 00-ff] end is updated to 00
Sep 12 17:43:46.340847 kernel: pci 8dab:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Sep 12 17:43:46.382403 kernel: mlx5_core 8dab:00:02.0: enabling device (0000 -> 0002)
Sep 12 17:43:46.389243 kernel: mlx5_core 8dab:00:02.0: firmware version: 16.31.2424
Sep 12 17:43:46.669089 kernel: hv_netvsc 00224879-249a-0022-4879-249a00224879 eth0: VF registering: eth1
Sep 12 17:43:46.669328 kernel: mlx5_core 8dab:00:02.0 eth1: joined to eth0
Sep 12 17:43:46.683310 kernel: mlx5_core 8dab:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Sep 12 17:43:46.698276 kernel: mlx5_core 8dab:00:02.0 enP36267s1: renamed from eth1
Sep 12 17:43:46.823483 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Sep 12 17:43:46.925274 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (497)
Sep 12 17:43:46.941741 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Sep 12 17:43:46.965677 kernel: BTRFS: device fsid 5a23a06a-00d4-4606-89bf-13e31a563129 devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (505)
Sep 12 17:43:46.980818 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Sep 12 17:43:46.988101 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Sep 12 17:43:47.012608 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Sep 12 17:43:47.037492 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 12 17:43:47.066281 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 12 17:43:47.076256 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 12 17:43:47.086251 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 12 17:43:48.087255 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 12 17:43:48.088257 disk-uuid[605]: The operation has completed successfully.
Sep 12 17:43:48.152267 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 12 17:43:48.152360 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 12 17:43:48.187382 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 12 17:43:48.200823 sh[718]: Success
Sep 12 17:43:48.240264 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Sep 12 17:43:48.582494 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 12 17:43:48.599365 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 12 17:43:48.609301 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 12 17:43:48.647009 kernel: BTRFS info (device dm-0): first mount of filesystem 5a23a06a-00d4-4606-89bf-13e31a563129
Sep 12 17:43:48.647067 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 12 17:43:48.654650 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 12 17:43:48.660160 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 12 17:43:48.664691 kernel: BTRFS info (device dm-0): using free space tree
Sep 12 17:43:49.223076 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 12 17:43:49.228714 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 12 17:43:49.249502 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 12 17:43:49.262403 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 12 17:43:49.295773 kernel: BTRFS info (device sda6): first mount of filesystem daec7f45-8bde-44bd-bec0-4b8eac931d0c
Sep 12 17:43:49.295794 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Sep 12 17:43:49.295804 kernel: BTRFS info (device sda6): using free space tree
Sep 12 17:43:49.356283 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 12 17:43:49.365582 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 12 17:43:49.378279 kernel: BTRFS info (device sda6): last unmount of filesystem daec7f45-8bde-44bd-bec0-4b8eac931d0c
Sep 12 17:43:49.384389 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 12 17:43:49.397460 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 12 17:43:49.405662 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 17:43:49.424775 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 17:43:49.462670 systemd-networkd[902]: lo: Link UP
Sep 12 17:43:49.466439 systemd-networkd[902]: lo: Gained carrier
Sep 12 17:43:49.468097 systemd-networkd[902]: Enumeration completed
Sep 12 17:43:49.468473 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 12 17:43:49.475490 systemd[1]: Reached target network.target - Network.
Sep 12 17:43:49.479376 systemd-networkd[902]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:43:49.479379 systemd-networkd[902]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 12 17:43:49.574353 kernel: mlx5_core 8dab:00:02.0 enP36267s1: Link up
Sep 12 17:43:49.574590 kernel: buffer_size[0]=0 is not enough for lossless buffer
Sep 12 17:43:49.649266 kernel: hv_netvsc 00224879-249a-0022-4879-249a00224879 eth0: Data path switched to VF: enP36267s1
Sep 12 17:43:49.649923 systemd-networkd[902]: enP36267s1: Link UP
Sep 12 17:43:49.650006 systemd-networkd[902]: eth0: Link UP
Sep 12 17:43:49.650096 systemd-networkd[902]: eth0: Gained carrier
Sep 12 17:43:49.650106 systemd-networkd[902]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:43:49.662427 systemd-networkd[902]: enP36267s1: Gained carrier
Sep 12 17:43:49.685277 systemd-networkd[902]: eth0: DHCPv4 address 10.200.20.46/24, gateway 10.200.20.1 acquired from 168.63.129.16
Sep 12 17:43:50.648261 ignition[900]: Ignition 2.19.0
Sep 12 17:43:50.648272 ignition[900]: Stage: fetch-offline
Sep 12 17:43:50.648314 ignition[900]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:43:50.656345 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 17:43:50.648322 ignition[900]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 12 17:43:50.671541 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 12 17:43:50.648431 ignition[900]: parsed url from cmdline: ""
Sep 12 17:43:50.648434 ignition[900]: no config URL provided
Sep 12 17:43:50.648439 ignition[900]: reading system config file "/usr/lib/ignition/user.ign"
Sep 12 17:43:50.648446 ignition[900]: no config at "/usr/lib/ignition/user.ign"
Sep 12 17:43:50.648451 ignition[900]: failed to fetch config: resource requires networking
Sep 12 17:43:50.652173 ignition[900]: Ignition finished successfully
Sep 12 17:43:50.715302 ignition[916]: Ignition 2.19.0
Sep 12 17:43:50.715308 ignition[916]: Stage: fetch
Sep 12 17:43:50.715495 ignition[916]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:43:50.715504 ignition[916]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 12 17:43:50.715606 ignition[916]: parsed url from cmdline: ""
Sep 12 17:43:50.715609 ignition[916]: no config URL provided
Sep 12 17:43:50.715613 ignition[916]: reading system config file "/usr/lib/ignition/user.ign"
Sep 12 17:43:50.715619 ignition[916]: no config at "/usr/lib/ignition/user.ign"
Sep 12 17:43:50.715640 ignition[916]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Sep 12 17:43:50.797763 ignition[916]: GET result: OK
Sep 12 17:43:50.797831 ignition[916]: config has been read from IMDS userdata
Sep 12 17:43:50.797874 ignition[916]: parsing config with SHA512: 111feba24a6a5549bd8317767a61fe94a10f4dc0dbd574616e8dceb0f4a726f19d3cc789498cd3742668ec09d1043104fcafa18a83950285d01532fadeeee718
Sep 12 17:43:50.801530 unknown[916]: fetched base config from "system"
Sep 12 17:43:50.801885 ignition[916]: fetch: fetch complete
Sep 12 17:43:50.801537 unknown[916]: fetched base config from "system"
Sep 12 17:43:50.801889 ignition[916]: fetch: fetch passed
Sep 12 17:43:50.801543 unknown[916]: fetched user config from "azure"
Sep 12 17:43:50.801926 ignition[916]: Ignition finished successfully
Sep 12 17:43:50.808069 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 12 17:43:50.818568 systemd-networkd[902]: eth0: Gained IPv6LL
Sep 12 17:43:50.848726 ignition[923]: Ignition 2.19.0
Sep 12 17:43:50.831528 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 12 17:43:50.848732 ignition[923]: Stage: kargs
Sep 12 17:43:50.851713 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 12 17:43:50.848915 ignition[923]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:43:50.848924 ignition[923]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 12 17:43:50.881531 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 12 17:43:50.849971 ignition[923]: kargs: kargs passed
Sep 12 17:43:50.907524 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 12 17:43:50.850029 ignition[923]: Ignition finished successfully
Sep 12 17:43:50.915445 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 12 17:43:50.902441 ignition[929]: Ignition 2.19.0
Sep 12 17:43:50.925060 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 12 17:43:50.902448 ignition[929]: Stage: disks
Sep 12 17:43:50.938969 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 17:43:50.902664 ignition[929]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:43:50.948792 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 12 17:43:50.902673 ignition[929]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 12 17:43:50.960261 systemd[1]: Reached target basic.target - Basic System.
Sep 12 17:43:50.903774 ignition[929]: disks: disks passed
Sep 12 17:43:50.984504 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 12 17:43:50.903818 ignition[929]: Ignition finished successfully
Sep 12 17:43:51.085761 systemd-fsck[938]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Sep 12 17:43:51.098085 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 12 17:43:51.117426 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 12 17:43:51.186273 kernel: EXT4-fs (sda9): mounted filesystem fc6c61a7-153d-4e7f-95c0-bffdb4824d71 r/w with ordered data mode. Quota mode: none.
Sep 12 17:43:51.186493 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 12 17:43:51.191716 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 12 17:43:51.238316 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 17:43:51.265585 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (949)
Sep 12 17:43:51.263220 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 12 17:43:51.271415 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Sep 12 17:43:51.302592 kernel: BTRFS info (device sda6): first mount of filesystem daec7f45-8bde-44bd-bec0-4b8eac931d0c
Sep 12 17:43:51.302615 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Sep 12 17:43:51.289248 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 12 17:43:51.328472 kernel: BTRFS info (device sda6): using free space tree
Sep 12 17:43:51.289282 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 17:43:51.316717 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 12 17:43:51.342457 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 12 17:43:51.363807 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 12 17:43:51.365613 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 17:43:51.955623 coreos-metadata[951]: Sep 12 17:43:51.955 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Sep 12 17:43:51.966106 coreos-metadata[951]: Sep 12 17:43:51.966 INFO Fetch successful
Sep 12 17:43:51.966106 coreos-metadata[951]: Sep 12 17:43:51.966 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Sep 12 17:43:51.984341 coreos-metadata[951]: Sep 12 17:43:51.980 INFO Fetch successful
Sep 12 17:43:51.998947 coreos-metadata[951]: Sep 12 17:43:51.998 INFO wrote hostname ci-4081.3.6-a-ca65cd0ccc to /sysroot/etc/hostname
Sep 12 17:43:52.009266 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 12 17:43:52.344407 initrd-setup-root[978]: cut: /sysroot/etc/passwd: No such file or directory
Sep 12 17:43:52.397285 initrd-setup-root[985]: cut: /sysroot/etc/group: No such file or directory
Sep 12 17:43:52.433681 initrd-setup-root[992]: cut: /sysroot/etc/shadow: No such file or directory
Sep 12 17:43:52.442521 initrd-setup-root[999]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 12 17:43:53.841083 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 12 17:43:53.856457 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 12 17:43:53.868502 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 12 17:43:53.886013 kernel: BTRFS info (device sda6): last unmount of filesystem daec7f45-8bde-44bd-bec0-4b8eac931d0c
Sep 12 17:43:53.881261 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 12 17:43:53.911774 ignition[1067]: INFO : Ignition 2.19.0
Sep 12 17:43:53.917349 ignition[1067]: INFO : Stage: mount
Sep 12 17:43:53.917349 ignition[1067]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 17:43:53.917349 ignition[1067]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 12 17:43:53.917349 ignition[1067]: INFO : mount: mount passed
Sep 12 17:43:53.917349 ignition[1067]: INFO : Ignition finished successfully
Sep 12 17:43:53.912958 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 12 17:43:53.924577 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 12 17:43:53.954467 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 12 17:43:53.974473 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 17:43:54.008249 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1078)
Sep 12 17:43:54.022478 kernel: BTRFS info (device sda6): first mount of filesystem daec7f45-8bde-44bd-bec0-4b8eac931d0c
Sep 12 17:43:54.022516 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Sep 12 17:43:54.026862 kernel: BTRFS info (device sda6): using free space tree
Sep 12 17:43:54.035249 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 12 17:43:54.036808 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 17:43:54.060655 ignition[1095]: INFO : Ignition 2.19.0
Sep 12 17:43:54.060655 ignition[1095]: INFO : Stage: files
Sep 12 17:43:54.060655 ignition[1095]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 17:43:54.060655 ignition[1095]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 12 17:43:54.060655 ignition[1095]: DEBUG : files: compiled without relabeling support, skipping
Sep 12 17:43:54.122985 ignition[1095]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 12 17:43:54.122985 ignition[1095]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 12 17:43:54.257727 ignition[1095]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 12 17:43:54.265366 ignition[1095]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 12 17:43:54.265366 ignition[1095]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 12 17:43:54.258100 unknown[1095]: wrote ssh authorized keys file for user: core
Sep 12 17:43:54.296385 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Sep 12 17:43:54.307778 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Sep 12 17:43:54.336383 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 12 17:43:54.570873 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Sep 12 17:43:54.581992 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 12 17:43:54.581992 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 12 17:43:54.581992 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 17:43:54.581992 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 17:43:54.581992 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 17:43:54.581992 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 17:43:54.581992 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 17:43:54.581992 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 17:43:54.661299 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 17:43:54.661299 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 17:43:54.661299 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 12 17:43:54.661299 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 12 17:43:54.661299 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 12 17:43:54.661299 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Sep 12 17:43:55.124298 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 12 17:43:55.363412 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 12 17:43:55.363412 ignition[1095]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 12 17:43:55.421190 ignition[1095]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 17:43:55.432581 ignition[1095]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 17:43:55.432581 ignition[1095]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 12 17:43:55.432581 ignition[1095]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Sep 12 17:43:55.432581 ignition[1095]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Sep 12 17:43:55.432581 ignition[1095]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 17:43:55.432581 ignition[1095]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 17:43:55.432581 ignition[1095]: INFO : files: files passed
Sep 12 17:43:55.432581 ignition[1095]: INFO : Ignition finished successfully
Sep 12 17:43:55.433278 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 12 17:43:55.471472 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 12 17:43:55.489391 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 12 17:43:55.505435 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 12 17:43:55.507595 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 12 17:43:55.554061 initrd-setup-root-after-ignition[1123]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:43:55.554061 initrd-setup-root-after-ignition[1123]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:43:55.577836 initrd-setup-root-after-ignition[1127]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:43:55.555285 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 17:43:55.569325 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 12 17:43:55.606480 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 12 17:43:55.636400 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 12 17:43:55.636516 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 12 17:43:55.648753 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 12 17:43:55.661000 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 12 17:43:55.671711 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 12 17:43:55.686500 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 12 17:43:55.708778 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 17:43:55.725491 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 12 17:43:55.743669 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 12 17:43:55.750517 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 17:43:55.762831 systemd[1]: Stopped target timers.target - Timer Units.
Sep 12 17:43:55.774012 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 12 17:43:55.774141 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 17:43:55.790071 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 12 17:43:55.796470 systemd[1]: Stopped target basic.target - Basic System.
Sep 12 17:43:55.807959 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 12 17:43:55.819242 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 17:43:55.830023 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 12 17:43:55.841768 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 12 17:43:55.853283 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 17:43:55.865789 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 12 17:43:55.876449 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 12 17:43:55.888927 systemd[1]: Stopped target swap.target - Swaps.
Sep 12 17:43:55.898874 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 12 17:43:55.899004 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 17:43:55.913950 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 12 17:43:55.919966 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 17:43:55.931945 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 12 17:43:55.937096 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 17:43:55.944372 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 12 17:43:55.944500 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 12 17:43:55.961433 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 12 17:43:55.961550 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 17:43:55.968356 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 12 17:43:55.968446 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 12 17:43:55.978662 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Sep 12 17:43:55.978754 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 12 17:43:56.004498 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 12 17:43:56.060743 ignition[1147]: INFO : Ignition 2.19.0
Sep 12 17:43:56.060743 ignition[1147]: INFO : Stage: umount
Sep 12 17:43:56.060743 ignition[1147]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 17:43:56.060743 ignition[1147]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 12 17:43:56.060743 ignition[1147]: INFO : umount: umount passed
Sep 12 17:43:56.060743 ignition[1147]: INFO : Ignition finished successfully
Sep 12 17:43:56.016431 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 12 17:43:56.016590 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 17:43:56.063143 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 12 17:43:56.077427 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 12 17:43:56.077605 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 17:43:56.089357 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 12 17:43:56.089452 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 17:43:56.111471 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 12 17:43:56.111575 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 12 17:43:56.124903 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 12 17:43:56.125879 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 12 17:43:56.126143 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 12 17:43:56.138121 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 12 17:43:56.138183 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 12 17:43:56.149917 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 12 17:43:56.149978 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 12 17:43:56.160711 systemd[1]: Stopped target network.target - Network.
Sep 12 17:43:56.172481 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 12 17:43:56.172556 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 17:43:56.184875 systemd[1]: Stopped target paths.target - Path Units.
Sep 12 17:43:56.196295 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 12 17:43:56.206174 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 17:43:56.213257 systemd[1]: Stopped target slices.target - Slice Units.
Sep 12 17:43:56.224989 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 12 17:43:56.235001 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 12 17:43:56.235072 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 17:43:56.245622 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 12 17:43:56.245683 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 17:43:56.256316 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 12 17:43:56.256371 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 12 17:43:56.266838 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 12 17:43:56.266884 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 12 17:43:56.277981 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 12 17:43:56.293278 systemd-networkd[902]: eth0: DHCPv6 lease lost Sep 12 17:43:56.294314 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 12 17:43:56.311278 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 12 17:43:56.311386 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 12 17:43:56.318134 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 12 17:43:56.318223 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 12 17:43:56.326049 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 12 17:43:56.326165 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 12 17:43:56.340068 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 12 17:43:56.340124 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 12 17:43:56.364451 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 12 17:43:56.582277 kernel: hv_netvsc 00224879-249a-0022-4879-249a00224879 eth0: Data path switched from VF: enP36267s1 Sep 12 17:43:56.374029 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 12 17:43:56.374106 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 17:43:56.385620 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 12 17:43:56.385682 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:43:56.401481 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 12 17:43:56.401534 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 12 17:43:56.413387 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
Sep 12 17:43:56.413435 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 17:43:56.426857 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 17:43:56.461666 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 12 17:43:56.461754 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 12 17:43:56.471938 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 12 17:43:56.472070 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 17:43:56.486307 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 12 17:43:56.486395 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 12 17:43:56.496684 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 12 17:43:56.496719 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 17:43:56.508353 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 12 17:43:56.508411 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 12 17:43:56.525908 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 12 17:43:56.525962 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 12 17:43:56.536499 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 12 17:43:56.536552 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 17:43:56.556807 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 12 17:43:56.556865 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 12 17:43:56.588455 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 12 17:43:56.605543 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. 
Sep 12 17:43:56.605631 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 17:43:56.617750 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 17:43:56.617801 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:43:56.629604 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 12 17:43:56.629700 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 12 17:43:56.704484 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 12 17:43:56.704637 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 12 17:43:56.714645 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 12 17:43:56.735429 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 12 17:43:56.754312 systemd[1]: Switching root. Sep 12 17:43:56.869874 systemd-journald[217]: Journal stopped Sep 12 17:43:44.312215 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Sep 12 17:43:44.312236 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Sep 12 15:59:19 -00 2025 Sep 12 17:43:44.312244 kernel: KASLR enabled Sep 12 17:43:44.312249 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Sep 12 17:43:44.312257 kernel: printk: bootconsole [pl11] enabled Sep 12 17:43:44.312262 kernel: efi: EFI v2.7 by EDK II Sep 12 17:43:44.312269 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f214018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18 Sep 12 17:43:44.312275 kernel: random: crng init done Sep 12 17:43:44.312281 kernel: ACPI: Early table checksum verification disabled Sep 12 17:43:44.312287 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL) Sep 12 17:43:44.312293 kernel: ACPI: XSDT 0x000000003FD5FF18 
00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 12 17:43:44.312299 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 12 17:43:44.312306 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628) Sep 12 17:43:44.312312 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 12 17:43:44.312320 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 12 17:43:44.312326 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 12 17:43:44.312333 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 12 17:43:44.312340 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 12 17:43:44.312347 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 12 17:43:44.312353 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Sep 12 17:43:44.312359 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 12 17:43:44.312366 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Sep 12 17:43:44.312372 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Sep 12 17:43:44.312378 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] Sep 12 17:43:44.312384 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] Sep 12 17:43:44.312391 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] Sep 12 17:43:44.312397 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] Sep 12 17:43:44.312403 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] Sep 12 17:43:44.312411 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] Sep 12 17:43:44.312417 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] Sep 12 17:43:44.312423 kernel: 
ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] Sep 12 17:43:44.312430 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] Sep 12 17:43:44.312436 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] Sep 12 17:43:44.312442 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] Sep 12 17:43:44.312448 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff] Sep 12 17:43:44.312454 kernel: Zone ranges: Sep 12 17:43:44.312460 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Sep 12 17:43:44.312467 kernel: DMA32 empty Sep 12 17:43:44.312473 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Sep 12 17:43:44.312479 kernel: Movable zone start for each node Sep 12 17:43:44.312493 kernel: Early memory node ranges Sep 12 17:43:44.312501 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Sep 12 17:43:44.312509 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff] Sep 12 17:43:44.312516 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff] Sep 12 17:43:44.312525 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] Sep 12 17:43:44.312534 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] Sep 12 17:43:44.312543 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] Sep 12 17:43:44.312551 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Sep 12 17:43:44.312559 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Sep 12 17:43:44.312567 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Sep 12 17:43:44.312574 kernel: psci: probing for conduit method from ACPI. Sep 12 17:43:44.312596 kernel: psci: PSCIv1.1 detected in firmware. Sep 12 17:43:44.312605 kernel: psci: Using standard PSCI v0.2 function IDs Sep 12 17:43:44.312613 kernel: psci: MIGRATE_INFO_TYPE not supported. 
Sep 12 17:43:44.312622 kernel: psci: SMC Calling Convention v1.4 Sep 12 17:43:44.312631 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Sep 12 17:43:44.312639 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Sep 12 17:43:44.312648 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 Sep 12 17:43:44.312655 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 Sep 12 17:43:44.312662 kernel: pcpu-alloc: [0] 0 [0] 1 Sep 12 17:43:44.312669 kernel: Detected PIPT I-cache on CPU0 Sep 12 17:43:44.312677 kernel: CPU features: detected: GIC system register CPU interface Sep 12 17:43:44.312685 kernel: CPU features: detected: Hardware dirty bit management Sep 12 17:43:44.312694 kernel: CPU features: detected: Spectre-BHB Sep 12 17:43:44.312702 kernel: CPU features: kernel page table isolation forced ON by KASLR Sep 12 17:43:44.312711 kernel: CPU features: detected: Kernel page table isolation (KPTI) Sep 12 17:43:44.312719 kernel: CPU features: detected: ARM erratum 1418040 Sep 12 17:43:44.312726 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) Sep 12 17:43:44.314777 kernel: CPU features: detected: SSBS not fully self-synchronizing Sep 12 17:43:44.314794 kernel: alternatives: applying boot alternatives Sep 12 17:43:44.314804 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=1e63d3057914877efa0eb5f75703bd3a3d4c120bdf4a7ab97f41083e29183e56 Sep 12 17:43:44.314812 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Sep 12 17:43:44.314819 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 12 17:43:44.314826 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 12 17:43:44.314833 kernel: Fallback order for Node 0: 0 Sep 12 17:43:44.314840 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156 Sep 12 17:43:44.314847 kernel: Policy zone: Normal Sep 12 17:43:44.314854 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 12 17:43:44.314860 kernel: software IO TLB: area num 2. Sep 12 17:43:44.314872 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB) Sep 12 17:43:44.314879 kernel: Memory: 3982564K/4194160K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39488K init, 897K bss, 211596K reserved, 0K cma-reserved) Sep 12 17:43:44.314886 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Sep 12 17:43:44.314893 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 12 17:43:44.314901 kernel: rcu: RCU event tracing is enabled. Sep 12 17:43:44.314908 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Sep 12 17:43:44.314915 kernel: Trampoline variant of Tasks RCU enabled. Sep 12 17:43:44.314921 kernel: Tracing variant of Tasks RCU enabled. Sep 12 17:43:44.314928 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Sep 12 17:43:44.314935 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Sep 12 17:43:44.314941 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Sep 12 17:43:44.314950 kernel: GICv3: 960 SPIs implemented Sep 12 17:43:44.314957 kernel: GICv3: 0 Extended SPIs implemented Sep 12 17:43:44.314963 kernel: Root IRQ handler: gic_handle_irq Sep 12 17:43:44.314970 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Sep 12 17:43:44.314977 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Sep 12 17:43:44.314983 kernel: ITS: No ITS available, not enabling LPIs Sep 12 17:43:44.314990 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 12 17:43:44.314997 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 12 17:43:44.315004 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Sep 12 17:43:44.315011 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Sep 12 17:43:44.315018 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Sep 12 17:43:44.315026 kernel: Console: colour dummy device 80x25 Sep 12 17:43:44.315033 kernel: printk: console [tty1] enabled Sep 12 17:43:44.315040 kernel: ACPI: Core revision 20230628 Sep 12 17:43:44.315048 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Sep 12 17:43:44.315055 kernel: pid_max: default: 32768 minimum: 301 Sep 12 17:43:44.315062 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Sep 12 17:43:44.315068 kernel: landlock: Up and running. Sep 12 17:43:44.315075 kernel: SELinux: Initializing. 
Sep 12 17:43:44.315083 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 12 17:43:44.315090 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 12 17:43:44.315098 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 12 17:43:44.315105 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 12 17:43:44.315113 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1 Sep 12 17:43:44.315120 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0 Sep 12 17:43:44.315126 kernel: Hyper-V: enabling crash_kexec_post_notifiers Sep 12 17:43:44.315133 kernel: rcu: Hierarchical SRCU implementation. Sep 12 17:43:44.315141 kernel: rcu: Max phase no-delay instances is 400. Sep 12 17:43:44.315154 kernel: Remapping and enabling EFI services. Sep 12 17:43:44.315161 kernel: smp: Bringing up secondary CPUs ... Sep 12 17:43:44.315168 kernel: Detected PIPT I-cache on CPU1 Sep 12 17:43:44.315176 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Sep 12 17:43:44.315184 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 12 17:43:44.315192 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Sep 12 17:43:44.315199 kernel: smp: Brought up 1 node, 2 CPUs Sep 12 17:43:44.315206 kernel: SMP: Total of 2 processors activated. 
Sep 12 17:43:44.315214 kernel: CPU features: detected: 32-bit EL0 Support Sep 12 17:43:44.315223 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Sep 12 17:43:44.315230 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Sep 12 17:43:44.315238 kernel: CPU features: detected: CRC32 instructions Sep 12 17:43:44.315245 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Sep 12 17:43:44.315252 kernel: CPU features: detected: LSE atomic instructions Sep 12 17:43:44.315259 kernel: CPU features: detected: Privileged Access Never Sep 12 17:43:44.315266 kernel: CPU: All CPU(s) started at EL1 Sep 12 17:43:44.315274 kernel: alternatives: applying system-wide alternatives Sep 12 17:43:44.315281 kernel: devtmpfs: initialized Sep 12 17:43:44.315290 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 12 17:43:44.315297 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Sep 12 17:43:44.315304 kernel: pinctrl core: initialized pinctrl subsystem Sep 12 17:43:44.315312 kernel: SMBIOS 3.1.0 present. 
Sep 12 17:43:44.315319 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Sep 12 17:43:44.315327 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 12 17:43:44.315334 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Sep 12 17:43:44.315341 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Sep 12 17:43:44.315349 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Sep 12 17:43:44.315358 kernel: audit: initializing netlink subsys (disabled) Sep 12 17:43:44.315365 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1 Sep 12 17:43:44.315372 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 12 17:43:44.315379 kernel: cpuidle: using governor menu Sep 12 17:43:44.315387 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Sep 12 17:43:44.315395 kernel: ASID allocator initialised with 32768 entries Sep 12 17:43:44.315402 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 12 17:43:44.315409 kernel: Serial: AMBA PL011 UART driver Sep 12 17:43:44.315416 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Sep 12 17:43:44.315425 kernel: Modules: 0 pages in range for non-PLT usage Sep 12 17:43:44.315432 kernel: Modules: 508992 pages in range for PLT usage Sep 12 17:43:44.315440 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 12 17:43:44.315447 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Sep 12 17:43:44.315455 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Sep 12 17:43:44.315462 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Sep 12 17:43:44.315469 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 12 17:43:44.315477 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Sep 12 17:43:44.315484 kernel: HugeTLB: 
registered 64.0 KiB page size, pre-allocated 0 pages Sep 12 17:43:44.315493 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Sep 12 17:43:44.315500 kernel: ACPI: Added _OSI(Module Device) Sep 12 17:43:44.315507 kernel: ACPI: Added _OSI(Processor Device) Sep 12 17:43:44.315515 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 12 17:43:44.315522 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 12 17:43:44.315529 kernel: ACPI: Interpreter enabled Sep 12 17:43:44.315536 kernel: ACPI: Using GIC for interrupt routing Sep 12 17:43:44.315544 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Sep 12 17:43:44.315551 kernel: printk: console [ttyAMA0] enabled Sep 12 17:43:44.315560 kernel: printk: bootconsole [pl11] disabled Sep 12 17:43:44.315567 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Sep 12 17:43:44.315574 kernel: iommu: Default domain type: Translated Sep 12 17:43:44.315581 kernel: iommu: DMA domain TLB invalidation policy: strict mode Sep 12 17:43:44.315589 kernel: efivars: Registered efivars operations Sep 12 17:43:44.315596 kernel: vgaarb: loaded Sep 12 17:43:44.315603 kernel: clocksource: Switched to clocksource arch_sys_counter Sep 12 17:43:44.315610 kernel: VFS: Disk quotas dquot_6.6.0 Sep 12 17:43:44.315618 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 12 17:43:44.315626 kernel: pnp: PnP ACPI init Sep 12 17:43:44.315634 kernel: pnp: PnP ACPI: found 0 devices Sep 12 17:43:44.315641 kernel: NET: Registered PF_INET protocol family Sep 12 17:43:44.315649 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 12 17:43:44.315656 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 12 17:43:44.315663 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 12 17:43:44.315671 kernel: TCP established hash table entries: 32768 (order: 
6, 262144 bytes, linear) Sep 12 17:43:44.315678 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 12 17:43:44.315685 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 12 17:43:44.315694 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 12 17:43:44.315702 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 12 17:43:44.315709 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 12 17:43:44.315716 kernel: PCI: CLS 0 bytes, default 64 Sep 12 17:43:44.315723 kernel: kvm [1]: HYP mode not available Sep 12 17:43:44.315731 kernel: Initialise system trusted keyrings Sep 12 17:43:44.315749 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 12 17:43:44.315757 kernel: Key type asymmetric registered Sep 12 17:43:44.315764 kernel: Asymmetric key parser 'x509' registered Sep 12 17:43:44.315773 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Sep 12 17:43:44.315780 kernel: io scheduler mq-deadline registered Sep 12 17:43:44.315788 kernel: io scheduler kyber registered Sep 12 17:43:44.315795 kernel: io scheduler bfq registered Sep 12 17:43:44.315802 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 12 17:43:44.315809 kernel: thunder_xcv, ver 1.0 Sep 12 17:43:44.315816 kernel: thunder_bgx, ver 1.0 Sep 12 17:43:44.315824 kernel: nicpf, ver 1.0 Sep 12 17:43:44.315831 kernel: nicvf, ver 1.0 Sep 12 17:43:44.315982 kernel: rtc-efi rtc-efi.0: registered as rtc0 Sep 12 17:43:44.316058 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-12T17:43:43 UTC (1757699023) Sep 12 17:43:44.316068 kernel: efifb: probing for efifb Sep 12 17:43:44.316076 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Sep 12 17:43:44.316083 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Sep 12 17:43:44.316091 kernel: efifb: scrolling: redraw Sep 12 17:43:44.316098 kernel: efifb: Truecolor: size=8:8:8:8, 
shift=24:16:8:0 Sep 12 17:43:44.316105 kernel: Console: switching to colour frame buffer device 128x48 Sep 12 17:43:44.316114 kernel: fb0: EFI VGA frame buffer device Sep 12 17:43:44.316122 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Sep 12 17:43:44.316129 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 12 17:43:44.316136 kernel: No ACPI PMU IRQ for CPU0 Sep 12 17:43:44.316143 kernel: No ACPI PMU IRQ for CPU1 Sep 12 17:43:44.316151 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available Sep 12 17:43:44.316158 kernel: watchdog: Delayed init of the lockup detector failed: -19 Sep 12 17:43:44.316165 kernel: watchdog: Hard watchdog permanently disabled Sep 12 17:43:44.316172 kernel: NET: Registered PF_INET6 protocol family Sep 12 17:43:44.316181 kernel: Segment Routing with IPv6 Sep 12 17:43:44.316188 kernel: In-situ OAM (IOAM) with IPv6 Sep 12 17:43:44.316195 kernel: NET: Registered PF_PACKET protocol family Sep 12 17:43:44.316202 kernel: Key type dns_resolver registered Sep 12 17:43:44.316209 kernel: registered taskstats version 1 Sep 12 17:43:44.316217 kernel: Loading compiled-in X.509 certificates Sep 12 17:43:44.316224 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 2d576b5e69e6c5de2f731966fe8b55173c144d02' Sep 12 17:43:44.316231 kernel: Key type .fscrypt registered Sep 12 17:43:44.316238 kernel: Key type fscrypt-provisioning registered Sep 12 17:43:44.316247 kernel: ima: No TPM chip found, activating TPM-bypass! 
Sep 12 17:43:44.316254 kernel: ima: Allocated hash algorithm: sha1 Sep 12 17:43:44.316262 kernel: ima: No architecture policies found Sep 12 17:43:44.316269 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Sep 12 17:43:44.316276 kernel: clk: Disabling unused clocks Sep 12 17:43:44.316283 kernel: Freeing unused kernel memory: 39488K Sep 12 17:43:44.316291 kernel: Run /init as init process Sep 12 17:43:44.316298 kernel: with arguments: Sep 12 17:43:44.316305 kernel: /init Sep 12 17:43:44.316313 kernel: with environment: Sep 12 17:43:44.316320 kernel: HOME=/ Sep 12 17:43:44.316327 kernel: TERM=linux Sep 12 17:43:44.316335 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 12 17:43:44.316344 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 12 17:43:44.316353 systemd[1]: Detected virtualization microsoft. Sep 12 17:43:44.316361 systemd[1]: Detected architecture arm64. Sep 12 17:43:44.316369 systemd[1]: Running in initrd. Sep 12 17:43:44.316378 systemd[1]: No hostname configured, using default hostname. Sep 12 17:43:44.316386 systemd[1]: Hostname set to . Sep 12 17:43:44.316394 systemd[1]: Initializing machine ID from random generator. Sep 12 17:43:44.316401 systemd[1]: Queued start job for default target initrd.target. Sep 12 17:43:44.316410 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 17:43:44.316418 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 17:43:44.316426 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Sep 12 17:43:44.316434 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 12 17:43:44.316444 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 12 17:43:44.316453 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 12 17:43:44.316462 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 12 17:43:44.316470 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 12 17:43:44.316478 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 17:43:44.316486 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 17:43:44.316504 systemd[1]: Reached target paths.target - Path Units. Sep 12 17:43:44.316513 systemd[1]: Reached target slices.target - Slice Units. Sep 12 17:43:44.316521 systemd[1]: Reached target swap.target - Swaps. Sep 12 17:43:44.316528 systemd[1]: Reached target timers.target - Timer Units. Sep 12 17:43:44.316536 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 17:43:44.316544 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 17:43:44.316552 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 12 17:43:44.316560 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 12 17:43:44.316568 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 12 17:43:44.316577 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 17:43:44.316585 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 17:43:44.316594 systemd[1]: Reached target sockets.target - Socket Units. 
Sep 12 17:43:44.316602 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 12 17:43:44.316610 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 17:43:44.316618 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 12 17:43:44.316626 systemd[1]: Starting systemd-fsck-usr.service... Sep 12 17:43:44.316634 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 17:43:44.316642 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 17:43:44.316670 systemd-journald[217]: Collecting audit messages is disabled. Sep 12 17:43:44.316689 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:43:44.316698 systemd-journald[217]: Journal started Sep 12 17:43:44.316718 systemd-journald[217]: Runtime Journal (/run/log/journal/05789322bf8b4baf8f40a8e5dedf8d74) is 8.0M, max 78.5M, 70.5M free. Sep 12 17:43:44.328774 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 17:43:44.327851 systemd-modules-load[218]: Inserted module 'overlay' Sep 12 17:43:44.360759 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 12 17:43:44.360119 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 12 17:43:44.375017 kernel: Bridge firewalling registered Sep 12 17:43:44.369024 systemd-modules-load[218]: Inserted module 'br_netfilter' Sep 12 17:43:44.370362 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 17:43:44.382317 systemd[1]: Finished systemd-fsck-usr.service. Sep 12 17:43:44.391148 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 17:43:44.403560 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Sep 12 17:43:44.425984 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 17:43:44.440028 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 17:43:44.450897 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 12 17:43:44.478928 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 17:43:44.486365 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 17:43:44.500645 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:43:44.507146 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 17:43:44.522943 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 17:43:44.551951 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 12 17:43:44.564050 dracut-cmdline[249]: dracut-dracut-053 Sep 12 17:43:44.564050 dracut-cmdline[249]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=1e63d3057914877efa0eb5f75703bd3a3d4c120bdf4a7ab97f41083e29183e56 Sep 12 17:43:44.568591 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 12 17:43:44.616956 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 17:43:44.629930 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Sep 12 17:43:44.662176 systemd-resolved[260]: Positive Trust Anchors:
Sep 12 17:43:44.662193 systemd-resolved[260]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 12 17:43:44.662226 systemd-resolved[260]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 12 17:43:44.664432 systemd-resolved[260]: Defaulting to hostname 'linux'.
Sep 12 17:43:44.666922 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 12 17:43:44.673903 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 12 17:43:44.769768 kernel: SCSI subsystem initialized
Sep 12 17:43:44.777766 kernel: Loading iSCSI transport class v2.0-870.
Sep 12 17:43:44.787759 kernel: iscsi: registered transport (tcp)
Sep 12 17:43:44.805761 kernel: iscsi: registered transport (qla4xxx)
Sep 12 17:43:44.805830 kernel: QLogic iSCSI HBA Driver
Sep 12 17:43:44.844208 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 12 17:43:44.862866 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 12 17:43:44.893100 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 12 17:43:44.893139 kernel: device-mapper: uevent: version 1.0.3
Sep 12 17:43:44.899769 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 12 17:43:44.947765 kernel: raid6: neonx8 gen() 15745 MB/s
Sep 12 17:43:44.967747 kernel: raid6: neonx4 gen() 15670 MB/s
Sep 12 17:43:44.987744 kernel: raid6: neonx2 gen() 13239 MB/s
Sep 12 17:43:45.008746 kernel: raid6: neonx1 gen() 10523 MB/s
Sep 12 17:43:45.028744 kernel: raid6: int64x8 gen() 6960 MB/s
Sep 12 17:43:45.048748 kernel: raid6: int64x4 gen() 7353 MB/s
Sep 12 17:43:45.069745 kernel: raid6: int64x2 gen() 6133 MB/s
Sep 12 17:43:45.093083 kernel: raid6: int64x1 gen() 5061 MB/s
Sep 12 17:43:45.093103 kernel: raid6: using algorithm neonx8 gen() 15745 MB/s
Sep 12 17:43:45.117179 kernel: raid6: .... xor() 12062 MB/s, rmw enabled
Sep 12 17:43:45.117220 kernel: raid6: using neon recovery algorithm
Sep 12 17:43:45.129421 kernel: xor: measuring software checksum speed
Sep 12 17:43:45.129437 kernel: 8regs : 19778 MB/sec
Sep 12 17:43:45.137857 kernel: 32regs : 18760 MB/sec
Sep 12 17:43:45.137869 kernel: arm64_neon : 26280 MB/sec
Sep 12 17:43:45.142156 kernel: xor: using function: arm64_neon (26280 MB/sec)
Sep 12 17:43:45.193768 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 12 17:43:45.203056 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 17:43:45.220929 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 17:43:45.243578 systemd-udevd[436]: Using default interface naming scheme 'v255'.
Sep 12 17:43:45.249159 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 17:43:45.275988 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 12 17:43:45.287909 dracut-pre-trigger[440]: rd.md=0: removing MD RAID activation
Sep 12 17:43:45.314981 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 17:43:45.330295 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 17:43:45.370327 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 17:43:45.387952 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 12 17:43:45.415933 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 12 17:43:45.426479 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 17:43:45.445315 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 17:43:45.466218 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 17:43:45.499757 kernel: hv_vmbus: Vmbus version:5.3
Sep 12 17:43:45.500980 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 12 17:43:45.523523 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 12 17:43:45.540666 kernel: hv_vmbus: registering driver hyperv_keyboard
Sep 12 17:43:45.540689 kernel: hv_vmbus: registering driver hid_hyperv
Sep 12 17:43:45.540703 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Sep 12 17:43:45.529949 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:43:45.567879 kernel: hv_vmbus: registering driver hv_netvsc
Sep 12 17:43:45.567900 kernel: pps_core: LinuxPPS API ver. 1 registered
Sep 12 17:43:45.567609 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 17:43:45.588248 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Sep 12 17:43:45.588267 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Sep 12 17:43:45.595085 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 17:43:45.607143 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Sep 12 17:43:45.600625 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:43:45.628244 kernel: hv_vmbus: registering driver hv_storvsc
Sep 12 17:43:45.621674 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:43:45.642802 kernel: scsi host0: storvsc_host_t
Sep 12 17:43:45.642964 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Sep 12 17:43:45.652635 kernel: scsi host1: storvsc_host_t
Sep 12 17:43:45.654478 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:43:45.676829 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Sep 12 17:43:45.664712 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 17:43:45.692403 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 17:43:45.692503 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:43:45.719020 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:43:45.747907 kernel: PTP clock support registered
Sep 12 17:43:45.747938 kernel: hv_utils: Registering HyperV Utility Driver
Sep 12 17:43:45.747948 kernel: hv_vmbus: registering driver hv_utils
Sep 12 17:43:45.740975 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:43:46.177599 kernel: hv_netvsc 00224879-249a-0022-4879-249a00224879 eth0: VF slot 1 added
Sep 12 17:43:46.177746 kernel: hv_utils: Heartbeat IC version 3.0
Sep 12 17:43:46.177758 kernel: hv_utils: Shutdown IC version 3.2
Sep 12 17:43:46.177768 kernel: hv_utils: TimeSync IC version 4.0
Sep 12 17:43:46.177776 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Sep 12 17:43:46.165157 systemd-resolved[260]: Clock change detected. Flushing caches.
Sep 12 17:43:46.178099 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 17:43:46.210095 kernel: hv_vmbus: registering driver hv_pci
Sep 12 17:43:46.210119 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 12 17:43:46.210129 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Sep 12 17:43:46.225257 kernel: hv_pci 14b82d14-8dab-4122-bf57-d3f9659b8c85: PCI VMBus probing: Using version 0x10004
Sep 12 17:43:46.243381 kernel: hv_pci 14b82d14-8dab-4122-bf57-d3f9659b8c85: PCI host bridge to bus 8dab:00
Sep 12 17:43:46.243558 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Sep 12 17:43:46.243670 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Sep 12 17:43:46.243762 kernel: pci_bus 8dab:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Sep 12 17:43:46.233010 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:43:46.298313 kernel: sd 0:0:0:0: [sda] Write Protect is off
Sep 12 17:43:46.298476 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Sep 12 17:43:46.298578 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Sep 12 17:43:46.298670 kernel: pci_bus 8dab:00: No busn resource found for root bus, will use [bus 00-ff]
Sep 12 17:43:46.298771 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 12 17:43:46.298785 kernel: pci 8dab:00:02.0: [15b3:1018] type 00 class 0x020000
Sep 12 17:43:46.298807 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Sep 12 17:43:46.298900 kernel: pci 8dab:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Sep 12 17:43:46.309294 kernel: pci 8dab:00:02.0: enabling Extended Tags
Sep 12 17:43:46.329249 kernel: pci 8dab:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 8dab:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Sep 12 17:43:46.340634 kernel: pci_bus 8dab:00: busn_res: [bus 00-ff] end is updated to 00
Sep 12 17:43:46.340847 kernel: pci 8dab:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Sep 12 17:43:46.382403 kernel: mlx5_core 8dab:00:02.0: enabling device (0000 -> 0002)
Sep 12 17:43:46.389243 kernel: mlx5_core 8dab:00:02.0: firmware version: 16.31.2424
Sep 12 17:43:46.669089 kernel: hv_netvsc 00224879-249a-0022-4879-249a00224879 eth0: VF registering: eth1
Sep 12 17:43:46.669328 kernel: mlx5_core 8dab:00:02.0 eth1: joined to eth0
Sep 12 17:43:46.683310 kernel: mlx5_core 8dab:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Sep 12 17:43:46.698276 kernel: mlx5_core 8dab:00:02.0 enP36267s1: renamed from eth1
Sep 12 17:43:46.823483 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Sep 12 17:43:46.925274 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (497)
Sep 12 17:43:46.941741 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Sep 12 17:43:46.965677 kernel: BTRFS: device fsid 5a23a06a-00d4-4606-89bf-13e31a563129 devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (505)
Sep 12 17:43:46.980818 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Sep 12 17:43:46.988101 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Sep 12 17:43:47.012608 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Sep 12 17:43:47.037492 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 12 17:43:47.066281 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 12 17:43:47.076256 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 12 17:43:47.086251 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 12 17:43:48.087255 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 12 17:43:48.088257 disk-uuid[605]: The operation has completed successfully.
Sep 12 17:43:48.152267 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 12 17:43:48.152360 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 12 17:43:48.187382 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 12 17:43:48.200823 sh[718]: Success
Sep 12 17:43:48.240264 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Sep 12 17:43:48.582494 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 12 17:43:48.599365 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 12 17:43:48.609301 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 12 17:43:48.647009 kernel: BTRFS info (device dm-0): first mount of filesystem 5a23a06a-00d4-4606-89bf-13e31a563129
Sep 12 17:43:48.647067 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 12 17:43:48.654650 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 12 17:43:48.660160 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 12 17:43:48.664691 kernel: BTRFS info (device dm-0): using free space tree
Sep 12 17:43:49.223076 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 12 17:43:49.228714 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 12 17:43:49.249502 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 12 17:43:49.262403 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 12 17:43:49.295773 kernel: BTRFS info (device sda6): first mount of filesystem daec7f45-8bde-44bd-bec0-4b8eac931d0c
Sep 12 17:43:49.295794 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Sep 12 17:43:49.295804 kernel: BTRFS info (device sda6): using free space tree
Sep 12 17:43:49.356283 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 12 17:43:49.365582 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 12 17:43:49.378279 kernel: BTRFS info (device sda6): last unmount of filesystem daec7f45-8bde-44bd-bec0-4b8eac931d0c
Sep 12 17:43:49.384389 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 12 17:43:49.397460 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 12 17:43:49.405662 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 17:43:49.424775 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 17:43:49.462670 systemd-networkd[902]: lo: Link UP
Sep 12 17:43:49.466439 systemd-networkd[902]: lo: Gained carrier
Sep 12 17:43:49.468097 systemd-networkd[902]: Enumeration completed
Sep 12 17:43:49.468473 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 12 17:43:49.475490 systemd[1]: Reached target network.target - Network.
Sep 12 17:43:49.479376 systemd-networkd[902]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:43:49.479379 systemd-networkd[902]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 12 17:43:49.574353 kernel: mlx5_core 8dab:00:02.0 enP36267s1: Link up
Sep 12 17:43:49.574590 kernel: buffer_size[0]=0 is not enough for lossless buffer
Sep 12 17:43:49.649266 kernel: hv_netvsc 00224879-249a-0022-4879-249a00224879 eth0: Data path switched to VF: enP36267s1
Sep 12 17:43:49.649923 systemd-networkd[902]: enP36267s1: Link UP
Sep 12 17:43:49.650006 systemd-networkd[902]: eth0: Link UP
Sep 12 17:43:49.650096 systemd-networkd[902]: eth0: Gained carrier
Sep 12 17:43:49.650106 systemd-networkd[902]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:43:49.662427 systemd-networkd[902]: enP36267s1: Gained carrier
Sep 12 17:43:49.685277 systemd-networkd[902]: eth0: DHCPv4 address 10.200.20.46/24, gateway 10.200.20.1 acquired from 168.63.129.16
Sep 12 17:43:50.648261 ignition[900]: Ignition 2.19.0
Sep 12 17:43:50.648272 ignition[900]: Stage: fetch-offline
Sep 12 17:43:50.648314 ignition[900]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:43:50.656345 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 17:43:50.648322 ignition[900]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 12 17:43:50.671541 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 12 17:43:50.648431 ignition[900]: parsed url from cmdline: ""
Sep 12 17:43:50.648434 ignition[900]: no config URL provided
Sep 12 17:43:50.648439 ignition[900]: reading system config file "/usr/lib/ignition/user.ign"
Sep 12 17:43:50.648446 ignition[900]: no config at "/usr/lib/ignition/user.ign"
Sep 12 17:43:50.648451 ignition[900]: failed to fetch config: resource requires networking
Sep 12 17:43:50.652173 ignition[900]: Ignition finished successfully
Sep 12 17:43:50.715302 ignition[916]: Ignition 2.19.0
Sep 12 17:43:50.715308 ignition[916]: Stage: fetch
Sep 12 17:43:50.715495 ignition[916]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:43:50.715504 ignition[916]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 12 17:43:50.715606 ignition[916]: parsed url from cmdline: ""
Sep 12 17:43:50.715609 ignition[916]: no config URL provided
Sep 12 17:43:50.715613 ignition[916]: reading system config file "/usr/lib/ignition/user.ign"
Sep 12 17:43:50.715619 ignition[916]: no config at "/usr/lib/ignition/user.ign"
Sep 12 17:43:50.715640 ignition[916]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Sep 12 17:43:50.797763 ignition[916]: GET result: OK
Sep 12 17:43:50.797831 ignition[916]: config has been read from IMDS userdata
Sep 12 17:43:50.797874 ignition[916]: parsing config with SHA512: 111feba24a6a5549bd8317767a61fe94a10f4dc0dbd574616e8dceb0f4a726f19d3cc789498cd3742668ec09d1043104fcafa18a83950285d01532fadeeee718
Sep 12 17:43:50.801530 unknown[916]: fetched base config from "system"
Sep 12 17:43:50.801885 ignition[916]: fetch: fetch complete
Sep 12 17:43:50.801537 unknown[916]: fetched base config from "system"
Sep 12 17:43:50.801889 ignition[916]: fetch: fetch passed
Sep 12 17:43:50.801543 unknown[916]: fetched user config from "azure"
Sep 12 17:43:50.801926 ignition[916]: Ignition finished successfully
Sep 12 17:43:50.808069 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 12 17:43:50.818568 systemd-networkd[902]: eth0: Gained IPv6LL
Sep 12 17:43:50.848726 ignition[923]: Ignition 2.19.0
Sep 12 17:43:50.831528 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 12 17:43:50.848732 ignition[923]: Stage: kargs
Sep 12 17:43:50.851713 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 12 17:43:50.848915 ignition[923]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:43:50.848924 ignition[923]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 12 17:43:50.881531 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 12 17:43:50.849971 ignition[923]: kargs: kargs passed
Sep 12 17:43:50.907524 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 12 17:43:50.850029 ignition[923]: Ignition finished successfully
Sep 12 17:43:50.915445 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 12 17:43:50.902441 ignition[929]: Ignition 2.19.0
Sep 12 17:43:50.925060 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 12 17:43:50.902448 ignition[929]: Stage: disks
Sep 12 17:43:50.938969 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 17:43:50.902664 ignition[929]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:43:50.948792 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 12 17:43:50.902673 ignition[929]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 12 17:43:50.960261 systemd[1]: Reached target basic.target - Basic System.
Sep 12 17:43:50.903774 ignition[929]: disks: disks passed
Sep 12 17:43:50.984504 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 12 17:43:50.903818 ignition[929]: Ignition finished successfully
Sep 12 17:43:51.085761 systemd-fsck[938]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Sep 12 17:43:51.098085 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 12 17:43:51.117426 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 12 17:43:51.186273 kernel: EXT4-fs (sda9): mounted filesystem fc6c61a7-153d-4e7f-95c0-bffdb4824d71 r/w with ordered data mode. Quota mode: none.
Sep 12 17:43:51.186493 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 12 17:43:51.191716 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 12 17:43:51.238316 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 17:43:51.265585 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (949)
Sep 12 17:43:51.263220 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 12 17:43:51.271415 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Sep 12 17:43:51.302592 kernel: BTRFS info (device sda6): first mount of filesystem daec7f45-8bde-44bd-bec0-4b8eac931d0c
Sep 12 17:43:51.302615 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Sep 12 17:43:51.289248 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 12 17:43:51.328472 kernel: BTRFS info (device sda6): using free space tree
Sep 12 17:43:51.289282 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 17:43:51.316717 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 12 17:43:51.342457 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 12 17:43:51.363807 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 12 17:43:51.365613 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 17:43:51.955623 coreos-metadata[951]: Sep 12 17:43:51.955 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Sep 12 17:43:51.966106 coreos-metadata[951]: Sep 12 17:43:51.966 INFO Fetch successful
Sep 12 17:43:51.966106 coreos-metadata[951]: Sep 12 17:43:51.966 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Sep 12 17:43:51.984341 coreos-metadata[951]: Sep 12 17:43:51.980 INFO Fetch successful
Sep 12 17:43:51.998947 coreos-metadata[951]: Sep 12 17:43:51.998 INFO wrote hostname ci-4081.3.6-a-ca65cd0ccc to /sysroot/etc/hostname
Sep 12 17:43:52.009266 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 12 17:43:52.344407 initrd-setup-root[978]: cut: /sysroot/etc/passwd: No such file or directory
Sep 12 17:43:52.397285 initrd-setup-root[985]: cut: /sysroot/etc/group: No such file or directory
Sep 12 17:43:52.433681 initrd-setup-root[992]: cut: /sysroot/etc/shadow: No such file or directory
Sep 12 17:43:52.442521 initrd-setup-root[999]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 12 17:43:53.841083 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 12 17:43:53.856457 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 12 17:43:53.868502 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 12 17:43:53.886013 kernel: BTRFS info (device sda6): last unmount of filesystem daec7f45-8bde-44bd-bec0-4b8eac931d0c
Sep 12 17:43:53.881261 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 12 17:43:53.911774 ignition[1067]: INFO : Ignition 2.19.0
Sep 12 17:43:53.917349 ignition[1067]: INFO : Stage: mount
Sep 12 17:43:53.917349 ignition[1067]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 17:43:53.917349 ignition[1067]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 12 17:43:53.917349 ignition[1067]: INFO : mount: mount passed
Sep 12 17:43:53.917349 ignition[1067]: INFO : Ignition finished successfully
Sep 12 17:43:53.912958 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 12 17:43:53.924577 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 12 17:43:53.954467 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 12 17:43:53.974473 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 17:43:54.008249 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1078)
Sep 12 17:43:54.022478 kernel: BTRFS info (device sda6): first mount of filesystem daec7f45-8bde-44bd-bec0-4b8eac931d0c
Sep 12 17:43:54.022516 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Sep 12 17:43:54.026862 kernel: BTRFS info (device sda6): using free space tree
Sep 12 17:43:54.035249 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 12 17:43:54.036808 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 17:43:54.060655 ignition[1095]: INFO : Ignition 2.19.0
Sep 12 17:43:54.060655 ignition[1095]: INFO : Stage: files
Sep 12 17:43:54.060655 ignition[1095]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 17:43:54.060655 ignition[1095]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 12 17:43:54.060655 ignition[1095]: DEBUG : files: compiled without relabeling support, skipping
Sep 12 17:43:54.122985 ignition[1095]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 12 17:43:54.122985 ignition[1095]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 12 17:43:54.257727 ignition[1095]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 12 17:43:54.265366 ignition[1095]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 12 17:43:54.265366 ignition[1095]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 12 17:43:54.258100 unknown[1095]: wrote ssh authorized keys file for user: core
Sep 12 17:43:54.296385 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Sep 12 17:43:54.307778 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Sep 12 17:43:54.336383 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 12 17:43:54.570873 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Sep 12 17:43:54.581992 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 12 17:43:54.581992 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 12 17:43:54.581992 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 17:43:54.581992 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 17:43:54.581992 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 17:43:54.581992 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 17:43:54.581992 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 17:43:54.581992 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 17:43:54.661299 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 17:43:54.661299 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 17:43:54.661299 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 12 17:43:54.661299 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 12 17:43:54.661299 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 12 17:43:54.661299 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Sep 12 17:43:55.124298 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 12 17:43:55.363412 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 12 17:43:55.363412 ignition[1095]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 12 17:43:55.421190 ignition[1095]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 17:43:55.432581 ignition[1095]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 17:43:55.432581 ignition[1095]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 12 17:43:55.432581 ignition[1095]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Sep 12 17:43:55.432581 ignition[1095]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Sep 12 17:43:55.432581 ignition[1095]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 17:43:55.432581 ignition[1095]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 17:43:55.432581 ignition[1095]: INFO : files: files passed
Sep 12 17:43:55.432581 ignition[1095]: INFO : Ignition finished successfully
Sep 12 17:43:55.433278 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 12 17:43:55.471472 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 12 17:43:55.489391 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 12 17:43:55.505435 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 12 17:43:55.507595 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 12 17:43:55.554061 initrd-setup-root-after-ignition[1123]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:43:55.554061 initrd-setup-root-after-ignition[1123]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:43:55.577836 initrd-setup-root-after-ignition[1127]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:43:55.555285 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 17:43:55.569325 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 12 17:43:55.606480 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 12 17:43:55.636400 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 12 17:43:55.636516 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 12 17:43:55.648753 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 12 17:43:55.661000 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 12 17:43:55.671711 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 12 17:43:55.686500 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 12 17:43:55.708778 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 17:43:55.725491 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 12 17:43:55.743669 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 12 17:43:55.750517 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 17:43:55.762831 systemd[1]: Stopped target timers.target - Timer Units.
Sep 12 17:43:55.774012 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 12 17:43:55.774141 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 17:43:55.790071 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 12 17:43:55.796470 systemd[1]: Stopped target basic.target - Basic System.
Sep 12 17:43:55.807959 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 12 17:43:55.819242 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 17:43:55.830023 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 12 17:43:55.841768 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 12 17:43:55.853283 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 17:43:55.865789 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 12 17:43:55.876449 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 12 17:43:55.888927 systemd[1]: Stopped target swap.target - Swaps.
Sep 12 17:43:55.898874 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 12 17:43:55.899004 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 17:43:55.913950 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 12 17:43:55.919966 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 17:43:55.931945 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 12 17:43:55.937096 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 17:43:55.944372 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 12 17:43:55.944500 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 12 17:43:55.961433 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 12 17:43:55.961550 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 17:43:55.968356 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 12 17:43:55.968446 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 12 17:43:55.978662 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Sep 12 17:43:55.978754 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 12 17:43:56.004498 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 12 17:43:56.060743 ignition[1147]: INFO : Ignition 2.19.0
Sep 12 17:43:56.060743 ignition[1147]: INFO : Stage: umount
Sep 12 17:43:56.060743 ignition[1147]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 17:43:56.060743 ignition[1147]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 12 17:43:56.060743 ignition[1147]: INFO : umount: umount passed
Sep 12 17:43:56.060743 ignition[1147]: INFO : Ignition finished successfully
Sep 12 17:43:56.016431 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 12 17:43:56.016590 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 17:43:56.063143 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 12 17:43:56.077427 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 12 17:43:56.077605 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 17:43:56.089357 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 12 17:43:56.089452 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 17:43:56.111471 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 12 17:43:56.111575 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 12 17:43:56.124903 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 12 17:43:56.125879 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 12 17:43:56.126143 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 12 17:43:56.138121 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 12 17:43:56.138183 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 12 17:43:56.149917 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 12 17:43:56.149978 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 12 17:43:56.160711 systemd[1]: Stopped target network.target - Network.
Sep 12 17:43:56.172481 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 12 17:43:56.172556 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 17:43:56.184875 systemd[1]: Stopped target paths.target - Path Units.
Sep 12 17:43:56.196295 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 12 17:43:56.206174 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 17:43:56.213257 systemd[1]: Stopped target slices.target - Slice Units.
Sep 12 17:43:56.224989 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 12 17:43:56.235001 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 12 17:43:56.235072 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 17:43:56.245622 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 12 17:43:56.245683 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 17:43:56.256316 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 12 17:43:56.256371 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 12 17:43:56.266838 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 12 17:43:56.266884 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 12 17:43:56.277981 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 12 17:43:56.293278 systemd-networkd[902]: eth0: DHCPv6 lease lost
Sep 12 17:43:56.294314 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 12 17:43:56.311278 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 12 17:43:56.311386 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 12 17:43:56.318134 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 12 17:43:56.318223 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 12 17:43:56.326049 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 12 17:43:56.326165 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 12 17:43:56.340068 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 12 17:43:56.340124 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 17:43:56.364451 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 12 17:43:56.582277 kernel: hv_netvsc 00224879-249a-0022-4879-249a00224879 eth0: Data path switched from VF: enP36267s1
Sep 12 17:43:56.374029 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 12 17:43:56.374106 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 17:43:56.385620 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 12 17:43:56.385682 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 12 17:43:56.401481 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 12 17:43:56.401534 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 12 17:43:56.413387 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 12 17:43:56.413435 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 17:43:56.426857 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 17:43:56.461666 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 12 17:43:56.461754 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 12 17:43:56.471938 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 12 17:43:56.472070 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 17:43:56.486307 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 12 17:43:56.486395 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 12 17:43:56.496684 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 12 17:43:56.496719 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 17:43:56.508353 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 12 17:43:56.508411 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 17:43:56.525908 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 12 17:43:56.525962 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 12 17:43:56.536499 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 12 17:43:56.536552 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:43:56.556807 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 12 17:43:56.556865 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 12 17:43:56.588455 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 12 17:43:56.605543 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 12 17:43:56.605631 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 17:43:56.617750 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 17:43:56.617801 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:43:56.629604 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 12 17:43:56.629700 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 12 17:43:56.704484 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 12 17:43:56.704637 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 12 17:43:56.714645 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 12 17:43:56.735429 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 12 17:43:56.754312 systemd[1]: Switching root.
Sep 12 17:43:56.869874 systemd-journald[217]: Journal stopped
Sep 12 17:44:05.945773 systemd-journald[217]: Received SIGTERM from PID 1 (systemd).
Sep 12 17:44:05.945796 kernel: SELinux: policy capability network_peer_controls=1
Sep 12 17:44:05.945807 kernel: SELinux: policy capability open_perms=1
Sep 12 17:44:05.945816 kernel: SELinux: policy capability extended_socket_class=1
Sep 12 17:44:05.945824 kernel: SELinux: policy capability always_check_network=0
Sep 12 17:44:05.945831 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 12 17:44:05.945840 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 12 17:44:05.945848 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 12 17:44:05.945856 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 12 17:44:05.945864 kernel: audit: type=1403 audit(1757699038.848:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 12 17:44:05.945874 systemd[1]: Successfully loaded SELinux policy in 157.864ms.
Sep 12 17:44:05.945884 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.101ms.
Sep 12 17:44:05.945894 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 12 17:44:05.945903 systemd[1]: Detected virtualization microsoft.
Sep 12 17:44:05.945912 systemd[1]: Detected architecture arm64.
Sep 12 17:44:05.945922 systemd[1]: Detected first boot.
Sep 12 17:44:05.945932 systemd[1]: Hostname set to .
Sep 12 17:44:05.945941 systemd[1]: Initializing machine ID from random generator.
Sep 12 17:44:05.945951 zram_generator::config[1190]: No configuration found.
Sep 12 17:44:05.945961 systemd[1]: Populated /etc with preset unit settings.
Sep 12 17:44:05.945970 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 12 17:44:05.945980 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 12 17:44:05.945989 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 12 17:44:05.945999 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 12 17:44:05.946008 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 12 17:44:05.946017 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 12 17:44:05.946026 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 12 17:44:05.946035 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 12 17:44:05.946046 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 12 17:44:05.946055 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 12 17:44:05.946064 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 12 17:44:05.946073 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 17:44:05.946083 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 17:44:05.946092 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 12 17:44:05.946101 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 12 17:44:05.946110 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 12 17:44:05.946120 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 17:44:05.946131 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Sep 12 17:44:05.946140 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 17:44:05.946151 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 12 17:44:05.946162 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 12 17:44:05.946171 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 12 17:44:05.946181 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 12 17:44:05.946190 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 17:44:05.946201 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 17:44:05.946211 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 17:44:05.946220 systemd[1]: Reached target swap.target - Swaps.
Sep 12 17:44:05.946240 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 12 17:44:05.946254 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 12 17:44:05.946264 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 17:44:05.946274 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 17:44:05.946286 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 17:44:05.946296 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 12 17:44:05.946305 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 12 17:44:05.946315 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 12 17:44:05.946324 systemd[1]: Mounting media.mount - External Media Directory...
Sep 12 17:44:05.946334 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 12 17:44:05.946345 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 12 17:44:05.946359 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 12 17:44:05.946369 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 12 17:44:05.946380 systemd[1]: Reached target machines.target - Containers.
Sep 12 17:44:05.946390 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 12 17:44:05.946399 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 17:44:05.946409 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 17:44:05.946418 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 12 17:44:05.946430 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 17:44:05.946440 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 12 17:44:05.946449 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 17:44:05.946459 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 12 17:44:05.946468 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 17:44:05.946478 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 12 17:44:05.946488 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 12 17:44:05.946497 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 12 17:44:05.946507 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 12 17:44:05.946518 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 12 17:44:05.946527 kernel: fuse: init (API version 7.39)
Sep 12 17:44:05.946536 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 17:44:05.946545 kernel: loop: module loaded
Sep 12 17:44:05.946554 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 17:44:05.946563 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 12 17:44:05.946573 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 12 17:44:05.946600 systemd-journald[1272]: Collecting audit messages is disabled.
Sep 12 17:44:05.946622 systemd-journald[1272]: Journal started
Sep 12 17:44:05.946642 systemd-journald[1272]: Runtime Journal (/run/log/journal/f0f7e0524c324001b06a97a1a8cf0b3a) is 8.0M, max 78.5M, 70.5M free.
Sep 12 17:44:04.784667 systemd[1]: Queued start job for default target multi-user.target.
Sep 12 17:44:04.975439 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Sep 12 17:44:04.975793 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 12 17:44:04.976111 systemd[1]: systemd-journald.service: Consumed 3.271s CPU time.
Sep 12 17:44:05.972914 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 17:44:05.972982 kernel: ACPI: bus type drm_connector registered
Sep 12 17:44:05.972995 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 12 17:44:05.986246 systemd[1]: Stopped verity-setup.service.
Sep 12 17:44:06.004265 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 17:44:06.005009 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 12 17:44:06.010799 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 12 17:44:06.016771 systemd[1]: Mounted media.mount - External Media Directory.
Sep 12 17:44:06.022366 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 12 17:44:06.028147 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 12 17:44:06.034502 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 12 17:44:06.039815 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 12 17:44:06.046636 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 17:44:06.054746 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 12 17:44:06.054876 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 12 17:44:06.061768 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 17:44:06.061892 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 17:44:06.068804 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 12 17:44:06.068932 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 12 17:44:06.074843 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 17:44:06.074969 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 17:44:06.082126 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 12 17:44:06.082262 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 12 17:44:06.089008 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 17:44:06.089136 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 17:44:06.095963 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 17:44:06.103106 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 12 17:44:06.110782 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 12 17:44:06.120271 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 17:44:06.134517 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 12 17:44:06.147328 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 12 17:44:06.154748 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 12 17:44:06.160829 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 12 17:44:06.160870 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 17:44:06.167444 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Sep 12 17:44:06.175034 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 12 17:44:06.182190 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 12 17:44:06.187717 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 17:44:06.224363 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 12 17:44:06.231186 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 12 17:44:06.237476 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 12 17:44:06.239512 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 12 17:44:06.246481 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 12 17:44:06.247436 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 17:44:06.254444 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 12 17:44:06.264903 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 12 17:44:06.272434 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Sep 12 17:44:06.280858 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 12 17:44:06.288627 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 12 17:44:06.297474 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 12 17:44:06.305365 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 12 17:44:06.315735 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 12 17:44:06.326874 systemd-journald[1272]: Time spent on flushing to /var/log/journal/f0f7e0524c324001b06a97a1a8cf0b3a is 12.275ms for 901 entries.
Sep 12 17:44:06.326874 systemd-journald[1272]: System Journal (/var/log/journal/f0f7e0524c324001b06a97a1a8cf0b3a) is 8.0M, max 2.6G, 2.6G free.
Sep 12 17:44:06.374033 systemd-journald[1272]: Received client request to flush runtime journal.
Sep 12 17:44:06.335570 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Sep 12 17:44:06.344480 udevadm[1327]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Sep 12 17:44:06.381349 kernel: loop0: detected capacity change from 0 to 114328
Sep 12 17:44:06.382303 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 12 17:44:06.424504 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 12 17:44:06.425137 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Sep 12 17:44:06.456282 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 17:44:06.899561 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 12 17:44:06.910608 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 12 17:44:06.917386 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 17:44:06.973258 kernel: loop1: detected capacity change from 0 to 211168
Sep 12 17:44:07.037271 kernel: loop2: detected capacity change from 0 to 31320
Sep 12 17:44:07.101879 systemd-tmpfiles[1342]: ACLs are not supported, ignoring.
Sep 12 17:44:07.101893 systemd-tmpfiles[1342]: ACLs are not supported, ignoring.
Sep 12 17:44:07.106265 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 17:44:07.656262 kernel: loop3: detected capacity change from 0 to 114432
Sep 12 17:44:08.200258 kernel: loop4: detected capacity change from 0 to 114328
Sep 12 17:44:08.214326 kernel: loop5: detected capacity change from 0 to 211168
Sep 12 17:44:08.234284 kernel: loop6: detected capacity change from 0 to 31320
Sep 12 17:44:08.248264 kernel: loop7: detected capacity change from 0 to 114432
Sep 12 17:44:08.270593 (sd-merge)[1348]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Sep 12 17:44:08.271023 (sd-merge)[1348]: Merged extensions into '/usr'.
Sep 12 17:44:08.275425 systemd[1]: Reloading requested from client PID 1324 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 12 17:44:08.275717 systemd[1]: Reloading...
Sep 12 17:44:08.344275 zram_generator::config[1370]: No configuration found.
Sep 12 17:44:08.483021 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 12 17:44:08.540466 systemd[1]: Reloading finished in 264 ms.
Sep 12 17:44:08.572883 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 12 17:44:08.580008 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 12 17:44:08.593410 systemd[1]: Starting ensure-sysext.service...
Sep 12 17:44:08.598443 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 12 17:44:08.608468 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 17:44:08.637474 systemd-udevd[1432]: Using default interface naming scheme 'v255'.
Sep 12 17:44:08.693807 systemd-tmpfiles[1431]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 12 17:44:08.694083 systemd-tmpfiles[1431]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 12 17:44:08.694756 systemd-tmpfiles[1431]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 12 17:44:08.694973 systemd-tmpfiles[1431]: ACLs are not supported, ignoring.
Sep 12 17:44:08.695016 systemd-tmpfiles[1431]: ACLs are not supported, ignoring.
Sep 12 17:44:08.697003 systemd[1]: Reloading requested from client PID 1430 ('systemctl') (unit ensure-sysext.service)...
Sep 12 17:44:08.697019 systemd[1]: Reloading...
Sep 12 17:44:08.734434 systemd-tmpfiles[1431]: Detected autofs mount point /boot during canonicalization of boot.
Sep 12 17:44:08.734444 systemd-tmpfiles[1431]: Skipping /boot
Sep 12 17:44:08.746178 systemd-tmpfiles[1431]: Detected autofs mount point /boot during canonicalization of boot.
Sep 12 17:44:08.746905 systemd-tmpfiles[1431]: Skipping /boot
Sep 12 17:44:08.777299 zram_generator::config[1460]: No configuration found.
Sep 12 17:44:08.883443 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 12 17:44:08.940132 systemd[1]: Reloading finished in 242 ms.
Sep 12 17:44:08.962706 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 17:44:08.980562 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 12 17:44:09.034567 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 12 17:44:09.042646 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 12 17:44:09.051938 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 12 17:44:09.060663 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 12 17:44:09.072144 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 17:44:09.073608 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 17:44:09.082584 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 17:44:09.093526 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 17:44:09.101633 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 17:44:09.102567 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 17:44:09.104793 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 17:44:09.111757 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 17:44:09.112147 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 17:44:09.119714 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 17:44:09.119851 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 17:44:09.142077 systemd[1]: Finished ensure-sysext.service.
Sep 12 17:44:09.147914 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 12 17:44:09.157090 systemd[1]: Expecting device dev-ptp_hyperv.device - /dev/ptp_hyperv...
Sep 12 17:44:09.163461 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 17:44:09.168414 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 17:44:09.175376 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 12 17:44:09.183435 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 17:44:09.192362 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 17:44:09.198847 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 17:44:09.198920 systemd[1]: Reached target time-set.target - System Time Set.
Sep 12 17:44:09.206583 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 12 17:44:09.213331 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 17:44:09.213512 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 17:44:09.222347 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 12 17:44:09.222499 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 12 17:44:09.229460 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 17:44:09.229601 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 17:44:09.236860 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 17:44:09.237958 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 17:44:09.247670 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 12 17:44:09.247765 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 12 17:44:09.272225 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 12 17:44:09.376083 augenrules[1558]: No rules
Sep 12 17:44:09.377726 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 12 17:44:09.430843 systemd-resolved[1527]: Positive Trust Anchors:
Sep 12 17:44:09.430865 systemd-resolved[1527]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 12 17:44:09.430898 systemd-resolved[1527]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 12 17:44:09.469444 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 17:44:09.492706 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 17:44:09.503325 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 12 17:44:09.532077 systemd-resolved[1527]: Using system hostname 'ci-4081.3.6-a-ca65cd0ccc'.
Sep 12 17:44:09.536180 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 12 17:44:09.547734 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 12 17:44:09.559696 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Sep 12 17:44:09.639731 systemd[1]: Condition check resulted in dev-ptp_hyperv.device - /dev/ptp_hyperv being skipped.
Sep 12 17:44:09.663262 kernel: mousedev: PS/2 mouse device common for all mice
Sep 12 17:44:09.680287 kernel: hv_vmbus: registering driver hv_balloon
Sep 12 17:44:09.687712 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:44:09.690404 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Sep 12 17:44:09.690474 kernel: hv_balloon: Memory hot add disabled on ARM64
Sep 12 17:44:09.704909 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 17:44:09.707301 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:44:09.726488 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:44:09.744269 kernel: hv_vmbus: registering driver hyperv_fb
Sep 12 17:44:09.757776 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Sep 12 17:44:09.757877 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Sep 12 17:44:09.764390 kernel: Console: switching to colour dummy device 80x25
Sep 12 17:44:09.767259 kernel: Console: switching to colour frame buffer device 128x48
Sep 12 17:44:09.778286 systemd-networkd[1576]: lo: Link UP
Sep 12 17:44:09.778295 systemd-networkd[1576]: lo: Gained carrier
Sep 12 17:44:09.780134 systemd-networkd[1576]: Enumeration completed
Sep 12 17:44:09.780330 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 12 17:44:09.781507 systemd-networkd[1576]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:44:09.781610 systemd-networkd[1576]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 12 17:44:09.790062 systemd[1]: Reached target network.target - Network.
Sep 12 17:44:09.803931 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 12 17:44:09.810884 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 17:44:09.811136 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:44:09.832442 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:44:09.860545 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1566)
Sep 12 17:44:09.886197 kernel: mlx5_core 8dab:00:02.0 enP36267s1: Link up
Sep 12 17:44:09.886657 kernel: buffer_size[0]=0 is not enough for lossless buffer
Sep 12 17:44:09.905204 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Sep 12 17:44:09.917404 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 12 17:44:09.932261 kernel: hv_netvsc 00224879-249a-0022-4879-249a00224879 eth0: Data path switched to VF: enP36267s1
Sep 12 17:44:09.933275 systemd-networkd[1576]: enP36267s1: Link UP
Sep 12 17:44:09.933375 systemd-networkd[1576]: eth0: Link UP
Sep 12 17:44:09.933378 systemd-networkd[1576]: eth0: Gained carrier
Sep 12 17:44:09.933393 systemd-networkd[1576]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:44:09.941656 systemd-networkd[1576]: enP36267s1: Gained carrier
Sep 12 17:44:09.949303 systemd-networkd[1576]: eth0: DHCPv4 address 10.200.20.46/24, gateway 10.200.20.1 acquired from 168.63.129.16
Sep 12 17:44:10.013671 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 12 17:44:10.041175 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Sep 12 17:44:10.055384 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Sep 12 17:44:10.196270 lvm[1657]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 12 17:44:10.243037 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Sep 12 17:44:10.251898 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 17:44:10.265422 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Sep 12 17:44:10.278183 lvm[1659]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 12 17:44:10.307762 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Sep 12 17:44:11.417294 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:44:11.487373 systemd-networkd[1576]: eth0: Gained IPv6LL
Sep 12 17:44:11.490068 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 12 17:44:11.497620 systemd[1]: Reached target network-online.target - Network is Online.
Sep 12 17:44:12.209432 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 12 17:44:12.219262 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 12 17:44:16.526271 ldconfig[1319]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 12 17:44:16.539259 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 12 17:44:16.550504 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 12 17:44:16.580201 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 12 17:44:16.586968 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 12 17:44:16.593340 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 12 17:44:16.599953 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 12 17:44:16.606947 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 12 17:44:16.613196 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 12 17:44:16.620156 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 12 17:44:16.627432 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 12 17:44:16.627472 systemd[1]: Reached target paths.target - Path Units.
Sep 12 17:44:16.632354 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 17:44:16.654285 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 12 17:44:16.662764 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 12 17:44:16.675256 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 12 17:44:16.682098 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 12 17:44:16.689546 systemd[1]: Reached target sockets.target - Socket Units.
Sep 12 17:44:16.695638 systemd[1]: Reached target basic.target - Basic System.
Sep 12 17:44:16.701736 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 12 17:44:16.701779 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 12 17:44:16.742345 systemd[1]: Starting chronyd.service - NTP client/server...
Sep 12 17:44:16.750392 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 12 17:44:16.767101 (chronyd)[1671]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Sep 12 17:44:16.771387 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Sep 12 17:44:16.780219 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 12 17:44:16.788174 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 12 17:44:16.795274 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 12 17:44:16.801183 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 12 17:44:16.801368 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy).
Sep 12 17:44:16.802725 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Sep 12 17:44:16.808863 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Sep 12 17:44:16.811792 KVP[1679]: KVP starting; pid is:1679
Sep 12 17:44:16.813420 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:44:16.822262 chronyd[1682]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Sep 12 17:44:16.827116 kernel: hv_utils: KVP IC version 4.0
Sep 12 17:44:16.824032 KVP[1679]: KVP LIC Version: 3.1
Sep 12 17:44:16.827740 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 12 17:44:16.834100 jq[1677]: false
Sep 12 17:44:16.837940 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 12 17:44:16.845319 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 12 17:44:16.853206 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 12 17:44:16.862209 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 12 17:44:16.875510 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 12 17:44:16.884041 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 12 17:44:16.884560 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 12 17:44:16.889398 chronyd[1682]: Timezone right/UTC failed leap second check, ignoring
Sep 12 17:44:16.889600 chronyd[1682]: Loaded seccomp filter (level 2)
Sep 12 17:44:16.890439 systemd[1]: Starting update-engine.service - Update Engine...
Sep 12 17:44:16.895553 extend-filesystems[1678]: Found loop4
Sep 12 17:44:16.895553 extend-filesystems[1678]: Found loop5
Sep 12 17:44:16.895553 extend-filesystems[1678]: Found loop6
Sep 12 17:44:16.895553 extend-filesystems[1678]: Found loop7
Sep 12 17:44:16.895553 extend-filesystems[1678]: Found sda
Sep 12 17:44:16.895553 extend-filesystems[1678]: Found sda1
Sep 12 17:44:16.895553 extend-filesystems[1678]: Found sda2
Sep 12 17:44:16.895553 extend-filesystems[1678]: Found sda3
Sep 12 17:44:16.895553 extend-filesystems[1678]: Found usr
Sep 12 17:44:16.895553 extend-filesystems[1678]: Found sda4
Sep 12 17:44:16.895553 extend-filesystems[1678]: Found sda6
Sep 12 17:44:16.895553 extend-filesystems[1678]: Found sda7
Sep 12 17:44:16.895553 extend-filesystems[1678]: Found sda9
Sep 12 17:44:16.895553 extend-filesystems[1678]: Checking size of /dev/sda9
Sep 12 17:44:16.900008 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 12 17:44:17.059207 extend-filesystems[1678]: Old size kept for /dev/sda9
Sep 12 17:44:17.059207 extend-filesystems[1678]: Found sr0
Sep 12 17:44:17.094174 update_engine[1694]: I20250912 17:44:17.019166 1694 main.cc:92] Flatcar Update Engine starting
Sep 12 17:44:16.918860 systemd[1]: Started chronyd.service - NTP client/server.
Sep 12 17:44:17.094617 jq[1696]: true
Sep 12 17:44:16.934614 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 12 17:44:16.934808 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 12 17:44:16.937180 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 12 17:44:17.094986 tar[1706]: linux-arm64/LICENSE
Sep 12 17:44:17.094986 tar[1706]: linux-arm64/helm
Sep 12 17:44:16.937979 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 12 17:44:17.095285 jq[1710]: true
Sep 12 17:44:16.960355 systemd[1]: motdgen.service: Deactivated successfully.
Sep 12 17:44:16.960541 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 12 17:44:16.984690 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 12 17:44:16.984942 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 12 17:44:17.018132 (ntainerd)[1714]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 12 17:44:17.034948 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 12 17:44:17.083864 systemd-logind[1692]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 12 17:44:17.084071 systemd-logind[1692]: New seat seat0.
Sep 12 17:44:17.086756 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 12 17:44:17.194890 dbus-daemon[1674]: [system] SELinux support is enabled
Sep 12 17:44:17.196402 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 12 17:44:17.237979 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1756)
Sep 12 17:44:17.236749 dbus-daemon[1674]: [system] Successfully activated service 'org.freedesktop.systemd1'
Sep 12 17:44:17.214838 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 12 17:44:17.238205 update_engine[1694]: I20250912 17:44:17.223628 1694 update_check_scheduler.cc:74] Next update check in 3m8s
Sep 12 17:44:17.216136 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 12 17:44:17.224111 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 12 17:44:17.224131 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 12 17:44:17.236710 systemd[1]: Started update-engine.service - Update Engine.
Sep 12 17:44:17.259515 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 12 17:44:17.277155 bash[1754]: Updated "/home/core/.ssh/authorized_keys"
Sep 12 17:44:17.276782 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 12 17:44:17.287117 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep 12 17:44:17.328759 coreos-metadata[1673]: Sep 12 17:44:17.328 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Sep 12 17:44:17.331933 coreos-metadata[1673]: Sep 12 17:44:17.330 INFO Fetch successful
Sep 12 17:44:17.332100 coreos-metadata[1673]: Sep 12 17:44:17.332 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Sep 12 17:44:17.386403 coreos-metadata[1673]: Sep 12 17:44:17.386 INFO Fetch successful
Sep 12 17:44:17.386731 coreos-metadata[1673]: Sep 12 17:44:17.386 INFO Fetching http://168.63.129.16/machine/068b7155-f165-4e11-82b9-b1ae90874740/26fb241a%2D1723%2D4dfa%2D985d%2D8e0dece0c23f.%5Fci%2D4081.3.6%2Da%2Dca65cd0ccc?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Sep 12 17:44:17.391424 coreos-metadata[1673]: Sep 12 17:44:17.391 INFO Fetch successful
Sep 12 17:44:17.391577 coreos-metadata[1673]: Sep 12 17:44:17.391 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Sep 12 17:44:17.405042 coreos-metadata[1673]: Sep 12 17:44:17.404 INFO Fetch successful
Sep 12 17:44:17.450543 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Sep 12 17:44:17.457998 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 12 17:44:17.580798 locksmithd[1769]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 12 17:44:17.757509 sshd_keygen[1707]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 12 17:44:17.800157 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 12 17:44:17.816623 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 12 17:44:17.825509 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Sep 12 17:44:17.835803 systemd[1]: issuegen.service: Deactivated successfully.
Sep 12 17:44:17.836330 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 12 17:44:17.841434 tar[1706]: linux-arm64/README.md
Sep 12 17:44:17.857517 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 12 17:44:17.871765 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 12 17:44:17.895460 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Sep 12 17:44:17.904902 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 12 17:44:17.920657 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 12 17:44:17.933551 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Sep 12 17:44:17.941992 systemd[1]: Reached target getty.target - Login Prompts.
Sep 12 17:44:18.057379 containerd[1714]: time="2025-09-12T17:44:18.057298100Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Sep 12 17:44:18.106179 containerd[1714]: time="2025-09-12T17:44:18.106065380Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 12 17:44:18.115060 containerd[1714]: time="2025-09-12T17:44:18.115009700Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.106-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 12 17:44:18.116028 containerd[1714]: time="2025-09-12T17:44:18.115999860Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 12 17:44:18.116142 containerd[1714]: time="2025-09-12T17:44:18.116128940Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep 12 17:44:18.116394 containerd[1714]: time="2025-09-12T17:44:18.116369980Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Sep 12 17:44:18.116472 containerd[1714]: time="2025-09-12T17:44:18.116458780Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Sep 12 17:44:18.116592 containerd[1714]: time="2025-09-12T17:44:18.116575860Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Sep 12 17:44:18.116644 containerd[1714]: time="2025-09-12T17:44:18.116631860Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 12 17:44:18.117415 containerd[1714]: time="2025-09-12T17:44:18.117392940Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 12 17:44:18.117505 containerd[1714]: time="2025-09-12T17:44:18.117491740Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 12 17:44:18.117559 containerd[1714]: time="2025-09-12T17:44:18.117546060Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Sep 12 17:44:18.117609 containerd[1714]: time="2025-09-12T17:44:18.117597820Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 12 17:44:18.117745 containerd[1714]: time="2025-09-12T17:44:18.117730940Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 12 17:44:18.118025 containerd[1714]: time="2025-09-12T17:44:18.118007540Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 12 17:44:18.118403 containerd[1714]: time="2025-09-12T17:44:18.118383060Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 12 17:44:18.120591 containerd[1714]: time="2025-09-12T17:44:18.120269900Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 12 17:44:18.120591 containerd[1714]: time="2025-09-12T17:44:18.120401780Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 12 17:44:18.120591 containerd[1714]: time="2025-09-12T17:44:18.120443980Z" level=info msg="metadata content store policy set" policy=shared
Sep 12 17:44:18.138075 containerd[1714]: time="2025-09-12T17:44:18.138026660Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 12 17:44:18.138315 containerd[1714]: time="2025-09-12T17:44:18.138299180Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 12 17:44:18.138380 containerd[1714]: time="2025-09-12T17:44:18.138367940Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Sep 12 17:44:18.138463 containerd[1714]: time="2025-09-12T17:44:18.138450500Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Sep 12 17:44:18.138530 containerd[1714]: time="2025-09-12T17:44:18.138517820Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep 12 17:44:18.138745 containerd[1714]: time="2025-09-12T17:44:18.138729740Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep 12 17:44:18.139078 containerd[1714]: time="2025-09-12T17:44:18.139061060Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep 12 17:44:18.139288 containerd[1714]: time="2025-09-12T17:44:18.139266460Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Sep 12 17:44:18.139353 containerd[1714]: time="2025-09-12T17:44:18.139341740Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Sep 12 17:44:18.139416 containerd[1714]: time="2025-09-12T17:44:18.139404220Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Sep 12 17:44:18.139514 containerd[1714]: time="2025-09-12T17:44:18.139501260Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep 12 17:44:18.139573 containerd[1714]: time="2025-09-12T17:44:18.139556900Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep 12 17:44:18.139623 containerd[1714]: time="2025-09-12T17:44:18.139612740Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep 12 17:44:18.139680 containerd[1714]: time="2025-09-12T17:44:18.139669020Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep 12 17:44:18.139731 containerd[1714]: time="2025-09-12T17:44:18.139720100Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Sep 12 17:44:18.139786 containerd[1714]: time="2025-09-12T17:44:18.139775140Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Sep 12 17:44:18.139842 containerd[1714]: time="2025-09-12T17:44:18.139830300Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Sep 12 17:44:18.139908 containerd[1714]: time="2025-09-12T17:44:18.139895420Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Sep 12 17:44:18.139967 containerd[1714]: time="2025-09-12T17:44:18.139954660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Sep 12 17:44:18.140018 containerd[1714]: time="2025-09-12T17:44:18.140006540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Sep 12 17:44:18.140086 containerd[1714]: time="2025-09-12T17:44:18.140074660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Sep 12 17:44:18.140403 containerd[1714]: time="2025-09-12T17:44:18.140132620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Sep 12 17:44:18.140403 containerd[1714]: time="2025-09-12T17:44:18.140150420Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Sep 12 17:44:18.140403 containerd[1714]: time="2025-09-12T17:44:18.140164460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Sep 12 17:44:18.140403 containerd[1714]: time="2025-09-12T17:44:18.140176220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Sep 12 17:44:18.140403 containerd[1714]: time="2025-09-12T17:44:18.140190460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Sep 12 17:44:18.140403 containerd[1714]: time="2025-09-12T17:44:18.140203820Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Sep 12 17:44:18.140403 containerd[1714]: time="2025-09-12T17:44:18.140220420Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Sep 12 17:44:18.140403 containerd[1714]: time="2025-09-12T17:44:18.140246500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Sep 12 17:44:18.140403 containerd[1714]: time="2025-09-12T17:44:18.140262380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Sep 12 17:44:18.140403 containerd[1714]: time="2025-09-12T17:44:18.140275260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Sep 12 17:44:18.140403 containerd[1714]: time="2025-09-12T17:44:18.140299300Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Sep 12 17:44:18.140403 containerd[1714]: time="2025-09-12T17:44:18.140328540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Sep 12 17:44:18.140403 containerd[1714]: time="2025-09-12T17:44:18.140341620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Sep 12 17:44:18.140403 containerd[1714]: time="2025-09-12T17:44:18.140352580Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Sep 12 17:44:18.140738 containerd[1714]: time="2025-09-12T17:44:18.140683780Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Sep 12 17:44:18.140738 containerd[1714]: time="2025-09-12T17:44:18.140709780Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Sep 12 17:44:18.142255 containerd[1714]: time="2025-09-12T17:44:18.140721860Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Sep 12 17:44:18.142255 containerd[1714]: time="2025-09-12T17:44:18.140855020Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Sep 12 17:44:18.142255 containerd[1714]: time="2025-09-12T17:44:18.140866540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Sep 12 17:44:18.142255 containerd[1714]: time="2025-09-12T17:44:18.140879540Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Sep 12 17:44:18.142255 containerd[1714]: time="2025-09-12T17:44:18.140889700Z" level=info msg="NRI interface is disabled by configuration."
Sep 12 17:44:18.142255 containerd[1714]: time="2025-09-12T17:44:18.140901420Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Sep 12 17:44:18.142413 containerd[1714]: time="2025-09-12T17:44:18.141184700Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Sep 12 17:44:18.142413 containerd[1714]: time="2025-09-12T17:44:18.141262580Z" level=info msg="Connect containerd service"
Sep 12 17:44:18.142413 containerd[1714]: time="2025-09-12T17:44:18.141299300Z" level=info msg="using legacy CRI server"
Sep 12 17:44:18.142413 containerd[1714]: time="2025-09-12T17:44:18.141316420Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 12 17:44:18.142413 containerd[1714]: time="2025-09-12T17:44:18.141404740Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Sep 12 17:44:18.142413 containerd[1714]: time="2025-09-12T17:44:18.142029380Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 12 17:44:18.142763 containerd[1714]: time="2025-09-12T17:44:18.142717140Z" level=info msg="Start subscribing containerd event"
Sep 12 17:44:18.142860 containerd[1714]: time="2025-09-12T17:44:18.142847500Z" level=info msg="Start recovering state"
Sep 12 17:44:18.142995 containerd[1714]: time="2025-09-12T17:44:18.142982220Z" level=info msg="Start event monitor"
Sep 12 17:44:18.143056 containerd[1714]: time="2025-09-12T17:44:18.143045220Z" level=info msg="Start snapshots syncer"
Sep 12 17:44:18.143117 containerd[1714]: time="2025-09-12T17:44:18.143106060Z" level=info msg="Start cni network conf syncer for default"
Sep 12 17:44:18.143162 containerd[1714]: time="2025-09-12T17:44:18.143151740Z" level=info msg="Start streaming server"
Sep 12 17:44:18.143469 containerd[1714]: time="2025-09-12T17:44:18.143452540Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 12 17:44:18.143604 containerd[1714]: time="2025-09-12T17:44:18.143590940Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 12 17:44:18.143841 containerd[1714]: time="2025-09-12T17:44:18.143817260Z" level=info msg="containerd successfully booted in 0.087290s"
Sep 12 17:44:18.143920 systemd[1]: Started containerd.service - containerd container runtime.
Sep 12 17:44:18.191868 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:44:18.199275 (kubelet)[1833]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 12 17:44:18.202746 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 12 17:44:18.213313 systemd[1]: Startup finished in 656ms (kernel) + 14.546s (initrd) + 19.522s (userspace) = 34.725s.
Sep 12 17:44:18.725361 kubelet[1833]: E0912 17:44:18.725302 1833 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:44:18.728156 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:44:18.728344 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:44:18.953109 login[1823]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Sep 12 17:44:18.971783 login[1824]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:44:18.980456 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 12 17:44:18.991529 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 12 17:44:18.993765 systemd-logind[1692]: New session 1 of user core. Sep 12 17:44:19.033433 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 12 17:44:19.047588 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 12 17:44:19.066509 (systemd)[1847]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 12 17:44:19.394713 systemd[1847]: Queued start job for default target default.target. Sep 12 17:44:19.402179 systemd[1847]: Created slice app.slice - User Application Slice. Sep 12 17:44:19.402210 systemd[1847]: Reached target paths.target - Paths. Sep 12 17:44:19.402223 systemd[1847]: Reached target timers.target - Timers. Sep 12 17:44:19.403436 systemd[1847]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 12 17:44:19.414970 systemd[1847]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 12 17:44:19.415084 systemd[1847]: Reached target sockets.target - Sockets. 
Sep 12 17:44:19.415097 systemd[1847]: Reached target basic.target - Basic System. Sep 12 17:44:19.415150 systemd[1847]: Reached target default.target - Main User Target. Sep 12 17:44:19.415178 systemd[1847]: Startup finished in 342ms. Sep 12 17:44:19.415256 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 12 17:44:19.426441 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 12 17:44:19.953494 login[1823]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:44:19.958049 systemd-logind[1692]: New session 2 of user core. Sep 12 17:44:19.963409 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 12 17:44:20.018574 waagent[1821]: 2025-09-12T17:44:20.018475Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Sep 12 17:44:20.024340 waagent[1821]: 2025-09-12T17:44:20.024261Z INFO Daemon Daemon OS: flatcar 4081.3.6 Sep 12 17:44:20.029215 waagent[1821]: 2025-09-12T17:44:20.028973Z INFO Daemon Daemon Python: 3.11.9 Sep 12 17:44:20.033971 waagent[1821]: 2025-09-12T17:44:20.033588Z INFO Daemon Daemon Run daemon Sep 12 17:44:20.038180 waagent[1821]: 2025-09-12T17:44:20.038121Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.6' Sep 12 17:44:20.048083 waagent[1821]: 2025-09-12T17:44:20.048008Z INFO Daemon Daemon Using waagent for provisioning Sep 12 17:44:20.054209 waagent[1821]: 2025-09-12T17:44:20.054156Z INFO Daemon Daemon Activate resource disk Sep 12 17:44:20.059660 waagent[1821]: 2025-09-12T17:44:20.059591Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Sep 12 17:44:20.073876 waagent[1821]: 2025-09-12T17:44:20.073805Z INFO Daemon Daemon Found device: None Sep 12 17:44:20.079326 waagent[1821]: 2025-09-12T17:44:20.079261Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Sep 12 17:44:20.087865 waagent[1821]: 
2025-09-12T17:44:20.087803Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Sep 12 17:44:20.101398 waagent[1821]: 2025-09-12T17:44:20.101330Z INFO Daemon Daemon Clean protocol and wireserver endpoint Sep 12 17:44:20.107431 waagent[1821]: 2025-09-12T17:44:20.107370Z INFO Daemon Daemon Running default provisioning handler Sep 12 17:44:20.120180 waagent[1821]: 2025-09-12T17:44:20.120091Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Sep 12 17:44:20.134530 waagent[1821]: 2025-09-12T17:44:20.134464Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Sep 12 17:44:20.145607 waagent[1821]: 2025-09-12T17:44:20.145541Z INFO Daemon Daemon cloud-init is enabled: False Sep 12 17:44:20.151429 waagent[1821]: 2025-09-12T17:44:20.151357Z INFO Daemon Daemon Copying ovf-env.xml Sep 12 17:44:20.273015 waagent[1821]: 2025-09-12T17:44:20.272918Z INFO Daemon Daemon Successfully mounted dvd Sep 12 17:44:20.303896 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Sep 12 17:44:20.307296 waagent[1821]: 2025-09-12T17:44:20.306486Z INFO Daemon Daemon Detect protocol endpoint Sep 12 17:44:20.311673 waagent[1821]: 2025-09-12T17:44:20.311600Z INFO Daemon Daemon Clean protocol and wireserver endpoint Sep 12 17:44:20.317816 waagent[1821]: 2025-09-12T17:44:20.317756Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Sep 12 17:44:20.324812 waagent[1821]: 2025-09-12T17:44:20.324754Z INFO Daemon Daemon Test for route to 168.63.129.16 Sep 12 17:44:20.330661 waagent[1821]: 2025-09-12T17:44:20.330600Z INFO Daemon Daemon Route to 168.63.129.16 exists Sep 12 17:44:20.336399 waagent[1821]: 2025-09-12T17:44:20.336338Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Sep 12 17:44:20.384925 waagent[1821]: 2025-09-12T17:44:20.384875Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Sep 12 17:44:20.392352 waagent[1821]: 2025-09-12T17:44:20.392320Z INFO Daemon Daemon Wire protocol version:2012-11-30 Sep 12 17:44:20.398158 waagent[1821]: 2025-09-12T17:44:20.398099Z INFO Daemon Daemon Server preferred version:2015-04-05 Sep 12 17:44:20.549096 waagent[1821]: 2025-09-12T17:44:20.548954Z INFO Daemon Daemon Initializing goal state during protocol detection Sep 12 17:44:20.556766 waagent[1821]: 2025-09-12T17:44:20.556689Z INFO Daemon Daemon Forcing an update of the goal state. Sep 12 17:44:20.566729 waagent[1821]: 2025-09-12T17:44:20.566678Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Sep 12 17:44:20.589165 waagent[1821]: 2025-09-12T17:44:20.589118Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175 Sep 12 17:44:20.595086 waagent[1821]: 2025-09-12T17:44:20.595036Z INFO Daemon Sep 12 17:44:20.598262 waagent[1821]: 2025-09-12T17:44:20.598197Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 328512a1-1a17-4e04-adbd-a3908d5f2d1b eTag: 6002005820384299741 source: Fabric] Sep 12 17:44:20.609725 waagent[1821]: 2025-09-12T17:44:20.609676Z INFO Daemon The vmSettings originated via Fabric; will ignore them. 
Sep 12 17:44:20.617152 waagent[1821]: 2025-09-12T17:44:20.617103Z INFO Daemon Sep 12 17:44:20.620151 waagent[1821]: 2025-09-12T17:44:20.620106Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Sep 12 17:44:20.632063 waagent[1821]: 2025-09-12T17:44:20.632020Z INFO Daemon Daemon Downloading artifacts profile blob Sep 12 17:44:20.710513 waagent[1821]: 2025-09-12T17:44:20.710421Z INFO Daemon Downloaded certificate {'thumbprint': '74F585589257792C17A693FB8E2A1CD5267F22EE', 'hasPrivateKey': True} Sep 12 17:44:20.720542 waagent[1821]: 2025-09-12T17:44:20.720485Z INFO Daemon Fetch goal state completed Sep 12 17:44:20.732277 waagent[1821]: 2025-09-12T17:44:20.732193Z INFO Daemon Daemon Starting provisioning Sep 12 17:44:20.737767 waagent[1821]: 2025-09-12T17:44:20.737701Z INFO Daemon Daemon Handle ovf-env.xml. Sep 12 17:44:20.743732 waagent[1821]: 2025-09-12T17:44:20.743679Z INFO Daemon Daemon Set hostname [ci-4081.3.6-a-ca65cd0ccc] Sep 12 17:44:20.768667 waagent[1821]: 2025-09-12T17:44:20.768590Z INFO Daemon Daemon Publish hostname [ci-4081.3.6-a-ca65cd0ccc] Sep 12 17:44:20.776275 waagent[1821]: 2025-09-12T17:44:20.776195Z INFO Daemon Daemon Examine /proc/net/route for primary interface Sep 12 17:44:20.782535 waagent[1821]: 2025-09-12T17:44:20.782474Z INFO Daemon Daemon Primary interface is [eth0] Sep 12 17:44:20.850893 systemd-networkd[1576]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:44:20.850901 systemd-networkd[1576]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Sep 12 17:44:20.850928 systemd-networkd[1576]: eth0: DHCP lease lost Sep 12 17:44:20.852410 waagent[1821]: 2025-09-12T17:44:20.852294Z INFO Daemon Daemon Create user account if not exists Sep 12 17:44:20.858328 waagent[1821]: 2025-09-12T17:44:20.858191Z INFO Daemon Daemon User core already exists, skip useradd Sep 12 17:44:20.859353 systemd-networkd[1576]: eth0: DHCPv6 lease lost Sep 12 17:44:20.865046 waagent[1821]: 2025-09-12T17:44:20.864964Z INFO Daemon Daemon Configure sudoer Sep 12 17:44:20.870483 waagent[1821]: 2025-09-12T17:44:20.870410Z INFO Daemon Daemon Configure sshd Sep 12 17:44:20.876247 waagent[1821]: 2025-09-12T17:44:20.876155Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Sep 12 17:44:20.889817 waagent[1821]: 2025-09-12T17:44:20.889736Z INFO Daemon Daemon Deploy ssh public key. Sep 12 17:44:20.898340 systemd-networkd[1576]: eth0: DHCPv4 address 10.200.20.46/24, gateway 10.200.20.1 acquired from 168.63.129.16 Sep 12 17:44:22.067194 waagent[1821]: 2025-09-12T17:44:22.067130Z INFO Daemon Daemon Provisioning complete Sep 12 17:44:22.087469 waagent[1821]: 2025-09-12T17:44:22.087420Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Sep 12 17:44:22.094949 waagent[1821]: 2025-09-12T17:44:22.094772Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Sep 12 17:44:22.105607 waagent[1821]: 2025-09-12T17:44:22.105544Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Sep 12 17:44:22.241804 waagent[1896]: 2025-09-12T17:44:22.241106Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Sep 12 17:44:22.241804 waagent[1896]: 2025-09-12T17:44:22.241273Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.6 Sep 12 17:44:22.241804 waagent[1896]: 2025-09-12T17:44:22.241334Z INFO ExtHandler ExtHandler Python: 3.11.9 Sep 12 17:44:22.777267 waagent[1896]: 2025-09-12T17:44:22.776380Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.6; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Sep 12 17:44:22.777267 waagent[1896]: 2025-09-12T17:44:22.776633Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 12 17:44:22.777267 waagent[1896]: 2025-09-12T17:44:22.776693Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 12 17:44:22.785522 waagent[1896]: 2025-09-12T17:44:22.785435Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Sep 12 17:44:22.791773 waagent[1896]: 2025-09-12T17:44:22.791725Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175 Sep 12 17:44:22.792323 waagent[1896]: 2025-09-12T17:44:22.792280Z INFO ExtHandler Sep 12 17:44:22.792398 waagent[1896]: 2025-09-12T17:44:22.792369Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 59597f00-b548-4bbf-94b4-3407f8ac5872 eTag: 6002005820384299741 source: Fabric] Sep 12 17:44:22.792701 waagent[1896]: 2025-09-12T17:44:22.792662Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Sep 12 17:44:22.793339 waagent[1896]: 2025-09-12T17:44:22.793290Z INFO ExtHandler Sep 12 17:44:22.793413 waagent[1896]: 2025-09-12T17:44:22.793383Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Sep 12 17:44:22.797915 waagent[1896]: 2025-09-12T17:44:22.797874Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Sep 12 17:44:22.896446 waagent[1896]: 2025-09-12T17:44:22.896345Z INFO ExtHandler Downloaded certificate {'thumbprint': '74F585589257792C17A693FB8E2A1CD5267F22EE', 'hasPrivateKey': True} Sep 12 17:44:22.897030 waagent[1896]: 2025-09-12T17:44:22.896973Z INFO ExtHandler Fetch goal state completed Sep 12 17:44:22.913783 waagent[1896]: 2025-09-12T17:44:22.913719Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1896 Sep 12 17:44:22.913958 waagent[1896]: 2025-09-12T17:44:22.913923Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Sep 12 17:44:22.915686 waagent[1896]: 2025-09-12T17:44:22.915639Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.6', '', 'Flatcar Container Linux by Kinvolk'] Sep 12 17:44:22.916059 waagent[1896]: 2025-09-12T17:44:22.916019Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Sep 12 17:44:22.994668 waagent[1896]: 2025-09-12T17:44:22.994220Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Sep 12 17:44:22.994668 waagent[1896]: 2025-09-12T17:44:22.994447Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Sep 12 17:44:23.001313 waagent[1896]: 2025-09-12T17:44:23.001273Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Sep 12 17:44:23.008371 systemd[1]: Reloading requested from client PID 1909 ('systemctl') (unit waagent.service)... Sep 12 17:44:23.008678 systemd[1]: Reloading... 
Sep 12 17:44:23.075275 zram_generator::config[1939]: No configuration found. Sep 12 17:44:23.205617 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:44:23.282289 systemd[1]: Reloading finished in 273 ms. Sep 12 17:44:23.307123 waagent[1896]: 2025-09-12T17:44:23.307031Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Sep 12 17:44:23.313465 systemd[1]: Reloading requested from client PID 1997 ('systemctl') (unit waagent.service)... Sep 12 17:44:23.313625 systemd[1]: Reloading... Sep 12 17:44:23.401297 zram_generator::config[2031]: No configuration found. Sep 12 17:44:23.504150 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:44:23.579550 systemd[1]: Reloading finished in 265 ms. Sep 12 17:44:23.603422 waagent[1896]: 2025-09-12T17:44:23.602619Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Sep 12 17:44:23.603422 waagent[1896]: 2025-09-12T17:44:23.602784Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Sep 12 17:44:24.103264 waagent[1896]: 2025-09-12T17:44:24.102200Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Sep 12 17:44:24.103264 waagent[1896]: 2025-09-12T17:44:24.102829Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Sep 12 17:44:24.103852 waagent[1896]: 2025-09-12T17:44:24.103794Z INFO ExtHandler ExtHandler Starting env monitor service. 
Sep 12 17:44:24.103972 waagent[1896]: 2025-09-12T17:44:24.103924Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 12 17:44:24.104095 waagent[1896]: 2025-09-12T17:44:24.104058Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 12 17:44:24.104372 waagent[1896]: 2025-09-12T17:44:24.104321Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Sep 12 17:44:24.104875 waagent[1896]: 2025-09-12T17:44:24.104819Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Sep 12 17:44:24.105341 waagent[1896]: 2025-09-12T17:44:24.105279Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Sep 12 17:44:24.105570 waagent[1896]: 2025-09-12T17:44:24.105525Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Sep 12 17:44:24.105570 waagent[1896]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Sep 12 17:44:24.105570 waagent[1896]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Sep 12 17:44:24.105570 waagent[1896]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Sep 12 17:44:24.105570 waagent[1896]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Sep 12 17:44:24.105570 waagent[1896]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 12 17:44:24.105570 waagent[1896]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 12 17:44:24.106177 waagent[1896]: 2025-09-12T17:44:24.106101Z INFO ExtHandler ExtHandler Start Extension Telemetry service. 
Sep 12 17:44:24.106227 waagent[1896]: 2025-09-12T17:44:24.106182Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 12 17:44:24.106595 waagent[1896]: 2025-09-12T17:44:24.106384Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 12 17:44:24.106796 waagent[1896]: 2025-09-12T17:44:24.106677Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Sep 12 17:44:24.106892 waagent[1896]: 2025-09-12T17:44:24.106841Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Sep 12 17:44:24.107186 waagent[1896]: 2025-09-12T17:44:24.107148Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Sep 12 17:44:24.107186 waagent[1896]: 2025-09-12T17:44:24.107017Z INFO EnvHandler ExtHandler Configure routes Sep 12 17:44:24.107612 waagent[1896]: 2025-09-12T17:44:24.107491Z INFO EnvHandler ExtHandler Gateway:None Sep 12 17:44:24.108147 waagent[1896]: 2025-09-12T17:44:24.108112Z INFO EnvHandler ExtHandler Routes:None Sep 12 17:44:24.114358 waagent[1896]: 2025-09-12T17:44:24.114290Z INFO ExtHandler ExtHandler Sep 12 17:44:24.114776 waagent[1896]: 2025-09-12T17:44:24.114711Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 2d1cf71b-37a2-4fa9-82c9-0701763a5d74 correlation dfa2fd52-9b01-4c0d-ad0c-92f1a7ce434d created: 2025-09-12T17:42:55.405273Z] Sep 12 17:44:24.116276 waagent[1896]: 2025-09-12T17:44:24.115501Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Sep 12 17:44:24.116276 waagent[1896]: 2025-09-12T17:44:24.116059Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Sep 12 17:44:24.161573 waagent[1896]: 2025-09-12T17:44:24.161510Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: BE66D735-F1ED-45C4-B002-1D1C457AC04E;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Sep 12 17:44:24.241116 waagent[1896]: 2025-09-12T17:44:24.240674Z INFO MonitorHandler ExtHandler Network interfaces: Sep 12 17:44:24.241116 waagent[1896]: Executing ['ip', '-a', '-o', 'link']: Sep 12 17:44:24.241116 waagent[1896]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Sep 12 17:44:24.241116 waagent[1896]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:79:24:9a brd ff:ff:ff:ff:ff:ff Sep 12 17:44:24.241116 waagent[1896]: 3: enP36267s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:79:24:9a brd ff:ff:ff:ff:ff:ff\ altname enP36267p0s2 Sep 12 17:44:24.241116 waagent[1896]: Executing ['ip', '-4', '-a', '-o', 'address']: Sep 12 17:44:24.241116 waagent[1896]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Sep 12 17:44:24.241116 waagent[1896]: 2: eth0 inet 10.200.20.46/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Sep 12 17:44:24.241116 waagent[1896]: Executing ['ip', '-6', '-a', '-o', 'address']: Sep 12 17:44:24.241116 waagent[1896]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Sep 12 17:44:24.241116 waagent[1896]: 2: eth0 inet6 fe80::222:48ff:fe79:249a/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Sep 12 17:44:24.299271 waagent[1896]: 2025-09-12T17:44:24.298826Z INFO EnvHandler ExtHandler 
Successfully added Azure fabric firewall rules. Current Firewall rules: Sep 12 17:44:24.299271 waagent[1896]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Sep 12 17:44:24.299271 waagent[1896]: pkts bytes target prot opt in out source destination Sep 12 17:44:24.299271 waagent[1896]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Sep 12 17:44:24.299271 waagent[1896]: pkts bytes target prot opt in out source destination Sep 12 17:44:24.299271 waagent[1896]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Sep 12 17:44:24.299271 waagent[1896]: pkts bytes target prot opt in out source destination Sep 12 17:44:24.299271 waagent[1896]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Sep 12 17:44:24.299271 waagent[1896]: 7 881 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Sep 12 17:44:24.299271 waagent[1896]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Sep 12 17:44:24.302862 waagent[1896]: 2025-09-12T17:44:24.302790Z INFO EnvHandler ExtHandler Current Firewall rules: Sep 12 17:44:24.302862 waagent[1896]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Sep 12 17:44:24.302862 waagent[1896]: pkts bytes target prot opt in out source destination Sep 12 17:44:24.302862 waagent[1896]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Sep 12 17:44:24.302862 waagent[1896]: pkts bytes target prot opt in out source destination Sep 12 17:44:24.302862 waagent[1896]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Sep 12 17:44:24.302862 waagent[1896]: pkts bytes target prot opt in out source destination Sep 12 17:44:24.302862 waagent[1896]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Sep 12 17:44:24.302862 waagent[1896]: 8 933 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Sep 12 17:44:24.302862 waagent[1896]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Sep 12 17:44:24.303114 waagent[1896]: 2025-09-12T17:44:24.303084Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Sep 12 
17:44:28.767362 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 12 17:44:28.774784 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:44:28.872640 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:44:28.886544 (kubelet)[2124]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:44:28.987243 kubelet[2124]: E0912 17:44:28.987144 2124 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:44:28.990532 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:44:28.990811 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:44:39.017579 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 12 17:44:39.029426 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:44:39.128820 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 12 17:44:39.132559 (kubelet)[2139]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:44:39.248052 kubelet[2139]: E0912 17:44:39.248001 2139 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:44:39.250546 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:44:39.250776 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:44:40.680712 chronyd[1682]: Selected source PHC0 Sep 12 17:44:42.156216 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 12 17:44:42.157894 systemd[1]: Started sshd@0-10.200.20.46:22-10.200.16.10:32846.service - OpenSSH per-connection server daemon (10.200.16.10:32846). Sep 12 17:44:42.747675 sshd[2146]: Accepted publickey for core from 10.200.16.10 port 32846 ssh2: RSA SHA256:AyxZlAlMlcR5vugKaGkRn7Fc8dKoxnuuK/wm0HuJN6k Sep 12 17:44:42.748958 sshd[2146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:44:42.753408 systemd-logind[1692]: New session 3 of user core. Sep 12 17:44:42.760403 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 12 17:44:43.165632 systemd[1]: Started sshd@1-10.200.20.46:22-10.200.16.10:32850.service - OpenSSH per-connection server daemon (10.200.16.10:32850). Sep 12 17:44:43.654052 sshd[2151]: Accepted publickey for core from 10.200.16.10 port 32850 ssh2: RSA SHA256:AyxZlAlMlcR5vugKaGkRn7Fc8dKoxnuuK/wm0HuJN6k Sep 12 17:44:43.655374 sshd[2151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:44:43.659524 systemd-logind[1692]: New session 4 of user core. 
Sep 12 17:44:43.665377 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 12 17:44:44.018675 sshd[2151]: pam_unix(sshd:session): session closed for user core Sep 12 17:44:44.021901 systemd[1]: sshd@1-10.200.20.46:22-10.200.16.10:32850.service: Deactivated successfully. Sep 12 17:44:44.023430 systemd[1]: session-4.scope: Deactivated successfully. Sep 12 17:44:44.024025 systemd-logind[1692]: Session 4 logged out. Waiting for processes to exit. Sep 12 17:44:44.024997 systemd-logind[1692]: Removed session 4. Sep 12 17:44:44.102575 systemd[1]: Started sshd@2-10.200.20.46:22-10.200.16.10:32854.service - OpenSSH per-connection server daemon (10.200.16.10:32854). Sep 12 17:44:44.556927 sshd[2158]: Accepted publickey for core from 10.200.16.10 port 32854 ssh2: RSA SHA256:AyxZlAlMlcR5vugKaGkRn7Fc8dKoxnuuK/wm0HuJN6k Sep 12 17:44:44.558261 sshd[2158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:44:44.561929 systemd-logind[1692]: New session 5 of user core. Sep 12 17:44:44.572434 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 12 17:44:44.885631 sshd[2158]: pam_unix(sshd:session): session closed for user core Sep 12 17:44:44.888532 systemd[1]: sshd@2-10.200.20.46:22-10.200.16.10:32854.service: Deactivated successfully. Sep 12 17:44:44.890101 systemd[1]: session-5.scope: Deactivated successfully. Sep 12 17:44:44.891823 systemd-logind[1692]: Session 5 logged out. Waiting for processes to exit. Sep 12 17:44:44.892888 systemd-logind[1692]: Removed session 5. Sep 12 17:44:44.957582 systemd[1]: Started sshd@3-10.200.20.46:22-10.200.16.10:32868.service - OpenSSH per-connection server daemon (10.200.16.10:32868). 
Sep 12 17:44:45.364972 sshd[2165]: Accepted publickey for core from 10.200.16.10 port 32868 ssh2: RSA SHA256:AyxZlAlMlcR5vugKaGkRn7Fc8dKoxnuuK/wm0HuJN6k Sep 12 17:44:45.366217 sshd[2165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:44:45.370816 systemd-logind[1692]: New session 6 of user core. Sep 12 17:44:45.376420 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 12 17:44:45.671004 sshd[2165]: pam_unix(sshd:session): session closed for user core Sep 12 17:44:45.673939 systemd[1]: sshd@3-10.200.20.46:22-10.200.16.10:32868.service: Deactivated successfully. Sep 12 17:44:45.676006 systemd[1]: session-6.scope: Deactivated successfully. Sep 12 17:44:45.677185 systemd-logind[1692]: Session 6 logged out. Waiting for processes to exit. Sep 12 17:44:45.678156 systemd-logind[1692]: Removed session 6. Sep 12 17:44:45.752569 systemd[1]: Started sshd@4-10.200.20.46:22-10.200.16.10:32870.service - OpenSSH per-connection server daemon (10.200.16.10:32870). Sep 12 17:44:46.203499 sshd[2172]: Accepted publickey for core from 10.200.16.10 port 32870 ssh2: RSA SHA256:AyxZlAlMlcR5vugKaGkRn7Fc8dKoxnuuK/wm0HuJN6k Sep 12 17:44:46.204756 sshd[2172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:44:46.209375 systemd-logind[1692]: New session 7 of user core. Sep 12 17:44:46.216399 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 12 17:44:46.696323 sudo[2175]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 12 17:44:46.696595 sudo[2175]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:44:46.734925 sudo[2175]: pam_unix(sudo:session): session closed for user root Sep 12 17:44:46.800404 sshd[2172]: pam_unix(sshd:session): session closed for user core Sep 12 17:44:46.803345 systemd[1]: sshd@4-10.200.20.46:22-10.200.16.10:32870.service: Deactivated successfully. 
Sep 12 17:44:46.804893 systemd[1]: session-7.scope: Deactivated successfully.
Sep 12 17:44:46.806275 systemd-logind[1692]: Session 7 logged out. Waiting for processes to exit.
Sep 12 17:44:46.807186 systemd-logind[1692]: Removed session 7.
Sep 12 17:44:46.874146 systemd[1]: Started sshd@5-10.200.20.46:22-10.200.16.10:32882.service - OpenSSH per-connection server daemon (10.200.16.10:32882).
Sep 12 17:44:47.284355 sshd[2180]: Accepted publickey for core from 10.200.16.10 port 32882 ssh2: RSA SHA256:AyxZlAlMlcR5vugKaGkRn7Fc8dKoxnuuK/wm0HuJN6k
Sep 12 17:44:47.285682 sshd[2180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:44:47.289222 systemd-logind[1692]: New session 8 of user core.
Sep 12 17:44:47.297396 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 12 17:44:47.519578 sudo[2184]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 12 17:44:47.519847 sudo[2184]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 17:44:47.522895 sudo[2184]: pam_unix(sudo:session): session closed for user root
Sep 12 17:44:47.527282 sudo[2183]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Sep 12 17:44:47.527537 sudo[2183]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 17:44:47.545496 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Sep 12 17:44:47.547435 auditctl[2187]: No rules
Sep 12 17:44:47.547730 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 12 17:44:47.547896 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Sep 12 17:44:47.550586 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 12 17:44:47.573156 augenrules[2205]: No rules
Sep 12 17:44:47.574542 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 12 17:44:47.575933 sudo[2183]: pam_unix(sudo:session): session closed for user root
Sep 12 17:44:47.648557 sshd[2180]: pam_unix(sshd:session): session closed for user core
Sep 12 17:44:47.652098 systemd[1]: sshd@5-10.200.20.46:22-10.200.16.10:32882.service: Deactivated successfully.
Sep 12 17:44:47.653648 systemd[1]: session-8.scope: Deactivated successfully.
Sep 12 17:44:47.654416 systemd-logind[1692]: Session 8 logged out. Waiting for processes to exit.
Sep 12 17:44:47.655173 systemd-logind[1692]: Removed session 8.
Sep 12 17:44:47.734726 systemd[1]: Started sshd@6-10.200.20.46:22-10.200.16.10:32896.service - OpenSSH per-connection server daemon (10.200.16.10:32896).
Sep 12 17:44:48.179519 sshd[2213]: Accepted publickey for core from 10.200.16.10 port 32896 ssh2: RSA SHA256:AyxZlAlMlcR5vugKaGkRn7Fc8dKoxnuuK/wm0HuJN6k
Sep 12 17:44:48.180787 sshd[2213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:44:48.185386 systemd-logind[1692]: New session 9 of user core.
Sep 12 17:44:48.190488 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 12 17:44:48.435396 sudo[2216]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 12 17:44:48.435667 sudo[2216]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 17:44:49.267779 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Sep 12 17:44:49.274622 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:44:49.393959 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:44:49.398635 (kubelet)[2234]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 12 17:44:49.434680 kubelet[2234]: E0912 17:44:49.434584 2234 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 12 17:44:49.437373 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 12 17:44:49.437506 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 12 17:44:49.967535 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 12 17:44:49.968583 (dockerd)[2246]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 12 17:44:50.547880 dockerd[2246]: time="2025-09-12T17:44:50.547786602Z" level=info msg="Starting up"
Sep 12 17:44:50.923188 dockerd[2246]: time="2025-09-12T17:44:50.922777570Z" level=info msg="Loading containers: start."
Sep 12 17:44:51.158272 kernel: Initializing XFRM netlink socket
Sep 12 17:44:51.408005 systemd-networkd[1576]: docker0: Link UP
Sep 12 17:44:51.434992 dockerd[2246]: time="2025-09-12T17:44:51.434451144Z" level=info msg="Loading containers: done."
Sep 12 17:44:51.444831 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck350876765-merged.mount: Deactivated successfully.
Sep 12 17:44:51.461274 dockerd[2246]: time="2025-09-12T17:44:51.461119113Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 12 17:44:51.461274 dockerd[2246]: time="2025-09-12T17:44:51.461260633Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Sep 12 17:44:51.461450 dockerd[2246]: time="2025-09-12T17:44:51.461374473Z" level=info msg="Daemon has completed initialization"
Sep 12 17:44:51.532927 dockerd[2246]: time="2025-09-12T17:44:51.532756777Z" level=info msg="API listen on /run/docker.sock"
Sep 12 17:44:51.533200 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 12 17:44:52.424390 containerd[1714]: time="2025-09-12T17:44:52.424340841Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\""
Sep 12 17:44:53.315845 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2274999286.mount: Deactivated successfully.
Sep 12 17:44:54.609283 containerd[1714]: time="2025-09-12T17:44:54.608664079Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:44:54.612119 containerd[1714]: time="2025-09-12T17:44:54.612069202Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=27390228"
Sep 12 17:44:54.615836 containerd[1714]: time="2025-09-12T17:44:54.615777165Z" level=info msg="ImageCreate event name:\"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:44:54.621052 containerd[1714]: time="2025-09-12T17:44:54.620972250Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:44:54.622847 containerd[1714]: time="2025-09-12T17:44:54.622191931Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"27386827\" in 2.19780389s"
Sep 12 17:44:54.622847 containerd[1714]: time="2025-09-12T17:44:54.622260451Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\""
Sep 12 17:44:54.624021 containerd[1714]: time="2025-09-12T17:44:54.623990293Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\""
Sep 12 17:44:55.791336 containerd[1714]: time="2025-09-12T17:44:55.791286907Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:44:55.794537 containerd[1714]: time="2025-09-12T17:44:55.794298229Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=23547917"
Sep 12 17:44:55.800093 containerd[1714]: time="2025-09-12T17:44:55.798952874Z" level=info msg="ImageCreate event name:\"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:44:55.804597 containerd[1714]: time="2025-09-12T17:44:55.804546639Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:44:55.805949 containerd[1714]: time="2025-09-12T17:44:55.805910400Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"25135832\" in 1.181881867s"
Sep 12 17:44:55.806079 containerd[1714]: time="2025-09-12T17:44:55.806062560Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\""
Sep 12 17:44:55.806886 containerd[1714]: time="2025-09-12T17:44:55.806861601Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\""
Sep 12 17:44:56.832046 containerd[1714]: time="2025-09-12T17:44:56.831993726Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:44:56.836668 containerd[1714]: time="2025-09-12T17:44:56.836624768Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=18295977"
Sep 12 17:44:56.840543 containerd[1714]: time="2025-09-12T17:44:56.840511049Z" level=info msg="ImageCreate event name:\"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:44:56.845277 containerd[1714]: time="2025-09-12T17:44:56.845191571Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:44:56.846536 containerd[1714]: time="2025-09-12T17:44:56.846369251Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"19883910\" in 1.03938901s"
Sep 12 17:44:56.846536 containerd[1714]: time="2025-09-12T17:44:56.846404851Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\""
Sep 12 17:44:56.847158 containerd[1714]: time="2025-09-12T17:44:56.846971532Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\""
Sep 12 17:44:57.810265 kernel: hv_balloon: Max. dynamic memory size: 4096 MB
Sep 12 17:44:57.885153 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount941737364.mount: Deactivated successfully.
Sep 12 17:44:58.238530 containerd[1714]: time="2025-09-12T17:44:58.238408059Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:44:58.242504 containerd[1714]: time="2025-09-12T17:44:58.242461061Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=28240106"
Sep 12 17:44:58.245505 containerd[1714]: time="2025-09-12T17:44:58.245455022Z" level=info msg="ImageCreate event name:\"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:44:58.249617 containerd[1714]: time="2025-09-12T17:44:58.249583543Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:44:58.250899 containerd[1714]: time="2025-09-12T17:44:58.250070903Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"28239125\" in 1.402843731s"
Sep 12 17:44:58.250899 containerd[1714]: time="2025-09-12T17:44:58.250106543Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\""
Sep 12 17:44:58.251038 containerd[1714]: time="2025-09-12T17:44:58.251011624Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Sep 12 17:44:58.943288 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2944724901.mount: Deactivated successfully.
Sep 12 17:44:59.517331 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Sep 12 17:44:59.525431 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:44:59.765061 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:44:59.775750 (kubelet)[2476]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 12 17:44:59.809018 kubelet[2476]: E0912 17:44:59.808974 2476 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 12 17:44:59.811640 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 12 17:44:59.811787 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 12 17:45:01.148846 containerd[1714]: time="2025-09-12T17:45:01.148787159Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:45:01.152637 containerd[1714]: time="2025-09-12T17:45:01.152593200Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152117"
Sep 12 17:45:01.156164 containerd[1714]: time="2025-09-12T17:45:01.156110522Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:45:01.162267 containerd[1714]: time="2025-09-12T17:45:01.161909964Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:45:01.163205 containerd[1714]: time="2025-09-12T17:45:01.163063204Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 2.91201794s"
Sep 12 17:45:01.163205 containerd[1714]: time="2025-09-12T17:45:01.163102604Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\""
Sep 12 17:45:01.164261 containerd[1714]: time="2025-09-12T17:45:01.164188884Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 12 17:45:01.798511 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4114482524.mount: Deactivated successfully.
Sep 12 17:45:01.825262 containerd[1714]: time="2025-09-12T17:45:01.825097076Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:45:01.828660 containerd[1714]: time="2025-09-12T17:45:01.828623797Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
Sep 12 17:45:01.832136 containerd[1714]: time="2025-09-12T17:45:01.832103438Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:45:01.837566 containerd[1714]: time="2025-09-12T17:45:01.837517960Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:45:01.839055 containerd[1714]: time="2025-09-12T17:45:01.839019001Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 674.796877ms"
Sep 12 17:45:01.839100 containerd[1714]: time="2025-09-12T17:45:01.839057681Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Sep 12 17:45:01.839507 containerd[1714]: time="2025-09-12T17:45:01.839482761Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Sep 12 17:45:02.427839 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2196377860.mount: Deactivated successfully.
Sep 12 17:45:02.495684 update_engine[1694]: I20250912 17:45:02.495620 1694 update_attempter.cc:509] Updating boot flags...
Sep 12 17:45:02.561271 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2542)
Sep 12 17:45:02.677434 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2545)
Sep 12 17:45:05.633287 containerd[1714]: time="2025-09-12T17:45:05.633215238Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:45:05.637184 containerd[1714]: time="2025-09-12T17:45:05.637145761Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69465857"
Sep 12 17:45:05.641277 containerd[1714]: time="2025-09-12T17:45:05.641228285Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:45:05.646264 containerd[1714]: time="2025-09-12T17:45:05.646185049Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:45:05.647532 containerd[1714]: time="2025-09-12T17:45:05.647372610Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 3.807857449s"
Sep 12 17:45:05.647532 containerd[1714]: time="2025-09-12T17:45:05.647409930Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\""
Sep 12 17:45:10.017374 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Sep 12 17:45:10.024577 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:45:10.369474 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:45:10.383091 (kubelet)[2675]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 12 17:45:10.426322 kubelet[2675]: E0912 17:45:10.426282 2675 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 12 17:45:10.428518 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 12 17:45:10.428774 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 12 17:45:12.457939 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:45:12.470486 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:45:12.501631 systemd[1]: Reloading requested from client PID 2690 ('systemctl') (unit session-9.scope)...
Sep 12 17:45:12.501783 systemd[1]: Reloading...
Sep 12 17:45:12.616263 zram_generator::config[2730]: No configuration found.
Sep 12 17:45:12.711886 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 12 17:45:12.790485 systemd[1]: Reloading finished in 288 ms.
Sep 12 17:45:12.835681 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 12 17:45:12.835920 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 12 17:45:12.836303 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:45:12.841573 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:45:13.003689 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:45:13.019541 (kubelet)[2798]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 12 17:45:13.137871 kubelet[2798]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 17:45:13.137871 kubelet[2798]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 12 17:45:13.137871 kubelet[2798]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 17:45:13.137871 kubelet[2798]: I0912 17:45:13.137304 2798 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 12 17:45:13.857468 kubelet[2798]: I0912 17:45:13.857429 2798 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Sep 12 17:45:13.857468 kubelet[2798]: I0912 17:45:13.857459 2798 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 12 17:45:13.857710 kubelet[2798]: I0912 17:45:13.857689 2798 server.go:956] "Client rotation is on, will bootstrap in background"
Sep 12 17:45:13.875837 kubelet[2798]: E0912 17:45:13.875782 2798 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.46:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.46:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Sep 12 17:45:13.882478 kubelet[2798]: I0912 17:45:13.881921 2798 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 12 17:45:13.891595 kubelet[2798]: E0912 17:45:13.891556 2798 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 12 17:45:13.891759 kubelet[2798]: I0912 17:45:13.891747 2798 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 12 17:45:13.894851 kubelet[2798]: I0912 17:45:13.894825 2798 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 12 17:45:13.896355 kubelet[2798]: I0912 17:45:13.896321 2798 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 12 17:45:13.896615 kubelet[2798]: I0912 17:45:13.896450 2798 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-a-ca65cd0ccc","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 12 17:45:13.897028 kubelet[2798]: I0912 17:45:13.896732 2798 topology_manager.go:138] "Creating topology manager with none policy"
Sep 12 17:45:13.897028 kubelet[2798]: I0912 17:45:13.896747 2798 container_manager_linux.go:303] "Creating device plugin manager"
Sep 12 17:45:13.897028 kubelet[2798]: I0912 17:45:13.896880 2798 state_mem.go:36] "Initialized new in-memory state store"
Sep 12 17:45:13.899817 kubelet[2798]: I0912 17:45:13.899800 2798 kubelet.go:480] "Attempting to sync node with API server"
Sep 12 17:45:13.899907 kubelet[2798]: I0912 17:45:13.899898 2798 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 12 17:45:13.899984 kubelet[2798]: I0912 17:45:13.899976 2798 kubelet.go:386] "Adding apiserver pod source"
Sep 12 17:45:13.901261 kubelet[2798]: I0912 17:45:13.901242 2798 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 12 17:45:13.904913 kubelet[2798]: E0912 17:45:13.904873 2798 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.46:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-a-ca65cd0ccc&limit=500&resourceVersion=0\": dial tcp 10.200.20.46:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Sep 12 17:45:13.905559 kubelet[2798]: E0912 17:45:13.905279 2798 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.46:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.46:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Sep 12 17:45:13.905559 kubelet[2798]: I0912 17:45:13.905422 2798 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Sep 12 17:45:13.906029 kubelet[2798]: I0912 17:45:13.906008 2798 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Sep 12 17:45:13.906608 kubelet[2798]: W0912 17:45:13.906073 2798 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 12 17:45:13.909859 kubelet[2798]: I0912 17:45:13.909807 2798 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 12 17:45:13.909859 kubelet[2798]: I0912 17:45:13.909858 2798 server.go:1289] "Started kubelet"
Sep 12 17:45:13.910417 kubelet[2798]: I0912 17:45:13.910389 2798 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Sep 12 17:45:13.911276 kubelet[2798]: I0912 17:45:13.911257 2798 server.go:317] "Adding debug handlers to kubelet server"
Sep 12 17:45:13.911955 kubelet[2798]: I0912 17:45:13.911879 2798 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 12 17:45:13.912253 kubelet[2798]: I0912 17:45:13.912207 2798 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 12 17:45:13.913809 kubelet[2798]: E0912 17:45:13.912361 2798 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.46:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.46:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.6-a-ca65cd0ccc.18649a080a9d3aa4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-a-ca65cd0ccc,UID:ci-4081.3.6-a-ca65cd0ccc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-a-ca65cd0ccc,},FirstTimestamp:2025-09-12 17:45:13.90982826 +0000 UTC m=+0.887012621,LastTimestamp:2025-09-12 17:45:13.90982826 +0000 UTC m=+0.887012621,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-a-ca65cd0ccc,}"
Sep 12 17:45:13.916224 kubelet[2798]: I0912 17:45:13.915353 2798 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 12 17:45:13.916696 kubelet[2798]: I0912 17:45:13.915480 2798 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 12 17:45:13.916864 kubelet[2798]: I0912 17:45:13.916839 2798 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 12 17:45:13.919790 kubelet[2798]: E0912 17:45:13.919755 2798 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-a-ca65cd0ccc\" not found"
Sep 12 17:45:13.920280 kubelet[2798]: E0912 17:45:13.920249 2798 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.46:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-a-ca65cd0ccc?timeout=10s\": dial tcp 10.200.20.46:6443: connect: connection refused" interval="200ms"
Sep 12 17:45:13.920725 kubelet[2798]: I0912 17:45:13.920705 2798 factory.go:223] Registration of the systemd container factory successfully
Sep 12 17:45:13.921501 kubelet[2798]: I0912 17:45:13.921471 2798 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 12 17:45:13.923862 kubelet[2798]: E0912 17:45:13.923841 2798 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 12 17:45:13.924493 kubelet[2798]: I0912 17:45:13.924474 2798 factory.go:223] Registration of the containerd container factory successfully
Sep 12 17:45:13.931166 kubelet[2798]: I0912 17:45:13.931130 2798 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 12 17:45:13.931302 kubelet[2798]: I0912 17:45:13.931213 2798 reconciler.go:26] "Reconciler: start to sync state"
Sep 12 17:45:13.933999 kubelet[2798]: E0912 17:45:13.933958 2798 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.46:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.46:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Sep 12 17:45:13.974199 kubelet[2798]: I0912 17:45:13.974157 2798 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 12 17:45:13.974343 kubelet[2798]: I0912 17:45:13.974274 2798 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 12 17:45:13.974343 kubelet[2798]: I0912 17:45:13.974294 2798 state_mem.go:36] "Initialized new in-memory state store"
Sep 12 17:45:13.981029 kubelet[2798]: I0912 17:45:13.980987 2798 policy_none.go:49] "None policy: Start"
Sep 12 17:45:13.981029 kubelet[2798]: I0912 17:45:13.981018 2798 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 12 17:45:13.981029 kubelet[2798]: I0912 17:45:13.981030 2798 state_mem.go:35] "Initializing new in-memory state store"
Sep 12 17:45:13.990667 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Sep 12 17:45:13.997382 kubelet[2798]: I0912 17:45:13.997327 2798 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Sep 12 17:45:14.001707 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Sep 12 17:45:14.004139 kubelet[2798]: I0912 17:45:14.004108 2798 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 12 17:45:14.004139 kubelet[2798]: I0912 17:45:14.004137 2798 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 12 17:45:14.004322 kubelet[2798]: I0912 17:45:14.004153 2798 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 12 17:45:14.004322 kubelet[2798]: I0912 17:45:14.004159 2798 kubelet.go:2436] "Starting kubelet main sync loop" Sep 12 17:45:14.004322 kubelet[2798]: E0912 17:45:14.004198 2798 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 17:45:14.005941 kubelet[2798]: E0912 17:45:14.005355 2798 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.46:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.46:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 12 17:45:14.006979 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Sep 12 17:45:14.016157 kubelet[2798]: E0912 17:45:14.016118 2798 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 12 17:45:14.016588 kubelet[2798]: I0912 17:45:14.016328 2798 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 17:45:14.016588 kubelet[2798]: I0912 17:45:14.016345 2798 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 17:45:14.016685 kubelet[2798]: I0912 17:45:14.016637 2798 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 17:45:14.019345 kubelet[2798]: E0912 17:45:14.019309 2798 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 12 17:45:14.019438 kubelet[2798]: E0912 17:45:14.019367 2798 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.6-a-ca65cd0ccc\" not found" Sep 12 17:45:14.120796 kubelet[2798]: I0912 17:45:14.120692 2798 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:45:14.121676 kubelet[2798]: E0912 17:45:14.121034 2798 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.46:6443/api/v1/nodes\": dial tcp 10.200.20.46:6443: connect: connection refused" node="ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:45:14.121676 kubelet[2798]: E0912 17:45:14.121591 2798 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.46:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-a-ca65cd0ccc?timeout=10s\": dial tcp 10.200.20.46:6443: connect: connection refused" interval="400ms" Sep 12 17:45:14.133099 kubelet[2798]: I0912 17:45:14.133048 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" 
(UniqueName: \"kubernetes.io/host-path/98f79bd3b5bcf99ad53cee5360026780-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-a-ca65cd0ccc\" (UID: \"98f79bd3b5bcf99ad53cee5360026780\") " pod="kube-system/kube-apiserver-ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:45:14.133099 kubelet[2798]: I0912 17:45:14.133108 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/98f79bd3b5bcf99ad53cee5360026780-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-a-ca65cd0ccc\" (UID: \"98f79bd3b5bcf99ad53cee5360026780\") " pod="kube-system/kube-apiserver-ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:45:14.133228 kubelet[2798]: I0912 17:45:14.133129 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/98f79bd3b5bcf99ad53cee5360026780-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-a-ca65cd0ccc\" (UID: \"98f79bd3b5bcf99ad53cee5360026780\") " pod="kube-system/kube-apiserver-ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:45:14.192707 systemd[1]: Created slice kubepods-burstable-pod98f79bd3b5bcf99ad53cee5360026780.slice - libcontainer container kubepods-burstable-pod98f79bd3b5bcf99ad53cee5360026780.slice. Sep 12 17:45:14.199005 kubelet[2798]: E0912 17:45:14.198853 2798 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-a-ca65cd0ccc\" not found" node="ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:45:14.204403 systemd[1]: Created slice kubepods-burstable-pod5336d4e16fa1531aaad4cbe14e2d4b4b.slice - libcontainer container kubepods-burstable-pod5336d4e16fa1531aaad4cbe14e2d4b4b.slice. 
Sep 12 17:45:14.206201 kubelet[2798]: E0912 17:45:14.206164 2798 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-a-ca65cd0ccc\" not found" node="ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:45:14.217994 systemd[1]: Created slice kubepods-burstable-pod8c8c98de8ccca5c9cb5eb5e1754a9139.slice - libcontainer container kubepods-burstable-pod8c8c98de8ccca5c9cb5eb5e1754a9139.slice. Sep 12 17:45:14.219680 kubelet[2798]: E0912 17:45:14.219649 2798 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-a-ca65cd0ccc\" not found" node="ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:45:14.234297 kubelet[2798]: I0912 17:45:14.234263 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5336d4e16fa1531aaad4cbe14e2d4b4b-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-a-ca65cd0ccc\" (UID: \"5336d4e16fa1531aaad4cbe14e2d4b4b\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:45:14.234573 kubelet[2798]: I0912 17:45:14.234445 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5336d4e16fa1531aaad4cbe14e2d4b4b-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-a-ca65cd0ccc\" (UID: \"5336d4e16fa1531aaad4cbe14e2d4b4b\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:45:14.234573 kubelet[2798]: I0912 17:45:14.234473 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5336d4e16fa1531aaad4cbe14e2d4b4b-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-a-ca65cd0ccc\" (UID: \"5336d4e16fa1531aaad4cbe14e2d4b4b\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-a-ca65cd0ccc" Sep 12 
17:45:14.234573 kubelet[2798]: I0912 17:45:14.234489 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5336d4e16fa1531aaad4cbe14e2d4b4b-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-a-ca65cd0ccc\" (UID: \"5336d4e16fa1531aaad4cbe14e2d4b4b\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:45:14.234573 kubelet[2798]: I0912 17:45:14.234524 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5336d4e16fa1531aaad4cbe14e2d4b4b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-a-ca65cd0ccc\" (UID: \"5336d4e16fa1531aaad4cbe14e2d4b4b\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:45:14.234573 kubelet[2798]: I0912 17:45:14.234542 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8c8c98de8ccca5c9cb5eb5e1754a9139-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-a-ca65cd0ccc\" (UID: \"8c8c98de8ccca5c9cb5eb5e1754a9139\") " pod="kube-system/kube-scheduler-ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:45:14.323268 kubelet[2798]: I0912 17:45:14.323215 2798 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:45:14.323573 kubelet[2798]: E0912 17:45:14.323537 2798 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.46:6443/api/v1/nodes\": dial tcp 10.200.20.46:6443: connect: connection refused" node="ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:45:14.500825 containerd[1714]: time="2025-09-12T17:45:14.500714872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-a-ca65cd0ccc,Uid:98f79bd3b5bcf99ad53cee5360026780,Namespace:kube-system,Attempt:0,}" Sep 12 
17:45:14.508024 containerd[1714]: time="2025-09-12T17:45:14.507982318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-a-ca65cd0ccc,Uid:5336d4e16fa1531aaad4cbe14e2d4b4b,Namespace:kube-system,Attempt:0,}" Sep 12 17:45:14.520989 containerd[1714]: time="2025-09-12T17:45:14.520950049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-a-ca65cd0ccc,Uid:8c8c98de8ccca5c9cb5eb5e1754a9139,Namespace:kube-system,Attempt:0,}" Sep 12 17:45:14.522646 kubelet[2798]: E0912 17:45:14.522591 2798 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.46:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-a-ca65cd0ccc?timeout=10s\": dial tcp 10.200.20.46:6443: connect: connection refused" interval="800ms" Sep 12 17:45:14.725569 kubelet[2798]: I0912 17:45:14.725307 2798 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:45:14.725764 kubelet[2798]: E0912 17:45:14.725742 2798 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.46:6443/api/v1/nodes\": dial tcp 10.200.20.46:6443: connect: connection refused" node="ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:45:14.791557 kubelet[2798]: E0912 17:45:14.791517 2798 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.46:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.46:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 12 17:45:14.818227 kubelet[2798]: E0912 17:45:14.818191 2798 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.46:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.46:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 12 17:45:15.108345 kubelet[2798]: E0912 17:45:15.108073 2798 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.46:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-a-ca65cd0ccc&limit=500&resourceVersion=0\": dial tcp 10.200.20.46:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 12 17:45:15.323479 kubelet[2798]: E0912 17:45:15.323434 2798 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.46:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-a-ca65cd0ccc?timeout=10s\": dial tcp 10.200.20.46:6443: connect: connection refused" interval="1.6s" Sep 12 17:45:15.341034 kubelet[2798]: E0912 17:45:15.340984 2798 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.46:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.46:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 12 17:45:15.529542 kubelet[2798]: I0912 17:45:15.529498 2798 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:45:15.529868 kubelet[2798]: E0912 17:45:15.529827 2798 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.46:6443/api/v1/nodes\": dial tcp 10.200.20.46:6443: connect: connection refused" node="ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:45:15.675580 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2827106859.mount: Deactivated successfully. 
Sep 12 17:45:15.704437 containerd[1714]: time="2025-09-12T17:45:15.704382956Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:45:15.708255 containerd[1714]: time="2025-09-12T17:45:15.708184640Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Sep 12 17:45:15.711483 containerd[1714]: time="2025-09-12T17:45:15.711443042Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:45:15.716093 containerd[1714]: time="2025-09-12T17:45:15.715366526Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:45:15.719148 containerd[1714]: time="2025-09-12T17:45:15.719101769Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 17:45:15.723277 containerd[1714]: time="2025-09-12T17:45:15.722674972Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:45:15.725842 containerd[1714]: time="2025-09-12T17:45:15.725770734Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 17:45:15.731042 containerd[1714]: time="2025-09-12T17:45:15.731003419Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:45:15.731994 
containerd[1714]: time="2025-09-12T17:45:15.731761059Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 1.22369862s" Sep 12 17:45:15.736715 containerd[1714]: time="2025-09-12T17:45:15.736539343Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 1.215508174s" Sep 12 17:45:15.738265 containerd[1714]: time="2025-09-12T17:45:15.738152145Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 1.237354713s" Sep 12 17:45:15.975436 kubelet[2798]: E0912 17:45:15.975317 2798 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.46:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.46:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 12 17:45:16.463700 kubelet[2798]: E0912 17:45:16.463656 2798 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.46:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.46:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 12 17:45:16.499193 containerd[1714]: time="2025-09-12T17:45:16.498334419Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:45:16.499193 containerd[1714]: time="2025-09-12T17:45:16.498383739Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:45:16.499193 containerd[1714]: time="2025-09-12T17:45:16.498405899Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:45:16.499193 containerd[1714]: time="2025-09-12T17:45:16.498483779Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:45:16.504294 containerd[1714]: time="2025-09-12T17:45:16.503496423Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:45:16.504294 containerd[1714]: time="2025-09-12T17:45:16.503559623Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:45:16.504294 containerd[1714]: time="2025-09-12T17:45:16.503585343Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:45:16.504294 containerd[1714]: time="2025-09-12T17:45:16.503889623Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:45:16.505056 containerd[1714]: time="2025-09-12T17:45:16.504874184Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:45:16.505223 containerd[1714]: time="2025-09-12T17:45:16.505187104Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:45:16.505407 containerd[1714]: time="2025-09-12T17:45:16.505311104Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:45:16.505585 containerd[1714]: time="2025-09-12T17:45:16.505540425Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:45:16.546513 systemd[1]: Started cri-containerd-5eda4a4a94f6d11eb6fa5750d1b2c96c1550f2403329c7b9e02dfbb64c08c567.scope - libcontainer container 5eda4a4a94f6d11eb6fa5750d1b2c96c1550f2403329c7b9e02dfbb64c08c567. Sep 12 17:45:16.551764 systemd[1]: Started cri-containerd-87430471fc66e28089fdc8dcd10ecb746f63216e441a6636289ca5ff9e44191e.scope - libcontainer container 87430471fc66e28089fdc8dcd10ecb746f63216e441a6636289ca5ff9e44191e. Sep 12 17:45:16.553075 systemd[1]: Started cri-containerd-ccac77dcaa902fdf13d43484fd035af411e07a6dca1ebff539409c79250b6658.scope - libcontainer container ccac77dcaa902fdf13d43484fd035af411e07a6dca1ebff539409c79250b6658. 
Sep 12 17:45:16.604422 containerd[1714]: time="2025-09-12T17:45:16.602826026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-a-ca65cd0ccc,Uid:98f79bd3b5bcf99ad53cee5360026780,Namespace:kube-system,Attempt:0,} returns sandbox id \"5eda4a4a94f6d11eb6fa5750d1b2c96c1550f2403329c7b9e02dfbb64c08c567\"" Sep 12 17:45:16.609226 containerd[1714]: time="2025-09-12T17:45:16.609076951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-a-ca65cd0ccc,Uid:5336d4e16fa1531aaad4cbe14e2d4b4b,Namespace:kube-system,Attempt:0,} returns sandbox id \"ccac77dcaa902fdf13d43484fd035af411e07a6dca1ebff539409c79250b6658\"" Sep 12 17:45:16.615500 containerd[1714]: time="2025-09-12T17:45:16.615464036Z" level=info msg="CreateContainer within sandbox \"5eda4a4a94f6d11eb6fa5750d1b2c96c1550f2403329c7b9e02dfbb64c08c567\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 12 17:45:16.619479 containerd[1714]: time="2025-09-12T17:45:16.619443160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-a-ca65cd0ccc,Uid:8c8c98de8ccca5c9cb5eb5e1754a9139,Namespace:kube-system,Attempt:0,} returns sandbox id \"87430471fc66e28089fdc8dcd10ecb746f63216e441a6636289ca5ff9e44191e\"" Sep 12 17:45:16.623108 containerd[1714]: time="2025-09-12T17:45:16.623061803Z" level=info msg="CreateContainer within sandbox \"ccac77dcaa902fdf13d43484fd035af411e07a6dca1ebff539409c79250b6658\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 12 17:45:16.628885 containerd[1714]: time="2025-09-12T17:45:16.628849487Z" level=info msg="CreateContainer within sandbox \"87430471fc66e28089fdc8dcd10ecb746f63216e441a6636289ca5ff9e44191e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 12 17:45:16.679698 containerd[1714]: time="2025-09-12T17:45:16.679642050Z" level=info msg="CreateContainer within sandbox 
\"5eda4a4a94f6d11eb6fa5750d1b2c96c1550f2403329c7b9e02dfbb64c08c567\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7dbe7cc885e49b8e648ca8fb1354ca660fa0cbd520e06453e4326b7a4058cd7b\"" Sep 12 17:45:16.680503 containerd[1714]: time="2025-09-12T17:45:16.680478691Z" level=info msg="StartContainer for \"7dbe7cc885e49b8e648ca8fb1354ca660fa0cbd520e06453e4326b7a4058cd7b\"" Sep 12 17:45:16.709287 containerd[1714]: time="2025-09-12T17:45:16.709141634Z" level=info msg="CreateContainer within sandbox \"ccac77dcaa902fdf13d43484fd035af411e07a6dca1ebff539409c79250b6658\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"999c8f0f7f29d9141ad914467a453d33b5c28e8849757aba5bd3f3d2cbf21646\"" Sep 12 17:45:16.711803 containerd[1714]: time="2025-09-12T17:45:16.711759757Z" level=info msg="StartContainer for \"999c8f0f7f29d9141ad914467a453d33b5c28e8849757aba5bd3f3d2cbf21646\"" Sep 12 17:45:16.717350 systemd[1]: Started cri-containerd-7dbe7cc885e49b8e648ca8fb1354ca660fa0cbd520e06453e4326b7a4058cd7b.scope - libcontainer container 7dbe7cc885e49b8e648ca8fb1354ca660fa0cbd520e06453e4326b7a4058cd7b. Sep 12 17:45:16.720010 containerd[1714]: time="2025-09-12T17:45:16.719309603Z" level=info msg="CreateContainer within sandbox \"87430471fc66e28089fdc8dcd10ecb746f63216e441a6636289ca5ff9e44191e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"bf91b660dcf8b45f27d59bf0fc8c1c95e7f9c1b737f45a472cfb2a21fe3f83dd\"" Sep 12 17:45:16.721783 containerd[1714]: time="2025-09-12T17:45:16.720796604Z" level=info msg="StartContainer for \"bf91b660dcf8b45f27d59bf0fc8c1c95e7f9c1b737f45a472cfb2a21fe3f83dd\"" Sep 12 17:45:16.753501 systemd[1]: Started cri-containerd-999c8f0f7f29d9141ad914467a453d33b5c28e8849757aba5bd3f3d2cbf21646.scope - libcontainer container 999c8f0f7f29d9141ad914467a453d33b5c28e8849757aba5bd3f3d2cbf21646. 
Sep 12 17:45:16.774492 systemd[1]: Started cri-containerd-bf91b660dcf8b45f27d59bf0fc8c1c95e7f9c1b737f45a472cfb2a21fe3f83dd.scope - libcontainer container bf91b660dcf8b45f27d59bf0fc8c1c95e7f9c1b737f45a472cfb2a21fe3f83dd. Sep 12 17:45:16.783546 containerd[1714]: time="2025-09-12T17:45:16.783437456Z" level=info msg="StartContainer for \"7dbe7cc885e49b8e648ca8fb1354ca660fa0cbd520e06453e4326b7a4058cd7b\" returns successfully" Sep 12 17:45:16.832715 containerd[1714]: time="2025-09-12T17:45:16.831479376Z" level=info msg="StartContainer for \"999c8f0f7f29d9141ad914467a453d33b5c28e8849757aba5bd3f3d2cbf21646\" returns successfully" Sep 12 17:45:16.849370 containerd[1714]: time="2025-09-12T17:45:16.849326031Z" level=info msg="StartContainer for \"bf91b660dcf8b45f27d59bf0fc8c1c95e7f9c1b737f45a472cfb2a21fe3f83dd\" returns successfully" Sep 12 17:45:17.016588 kubelet[2798]: E0912 17:45:17.016493 2798 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-a-ca65cd0ccc\" not found" node="ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:45:17.018504 kubelet[2798]: E0912 17:45:17.018330 2798 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-a-ca65cd0ccc\" not found" node="ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:45:17.021157 kubelet[2798]: E0912 17:45:17.021001 2798 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-a-ca65cd0ccc\" not found" node="ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:45:17.133003 kubelet[2798]: I0912 17:45:17.132336 2798 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:45:18.025844 kubelet[2798]: E0912 17:45:18.025810 2798 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-a-ca65cd0ccc\" not found" node="ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:45:18.026824 
kubelet[2798]: E0912 17:45:18.026577 2798 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-a-ca65cd0ccc\" not found" node="ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:45:18.632150 kubelet[2798]: E0912 17:45:18.632098 2798 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.6-a-ca65cd0ccc\" not found" node="ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:45:18.819710 kubelet[2798]: I0912 17:45:18.819598 2798 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:45:18.819710 kubelet[2798]: E0912 17:45:18.819659 2798 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4081.3.6-a-ca65cd0ccc\": node \"ci-4081.3.6-a-ca65cd0ccc\" not found" Sep 12 17:45:18.907895 kubelet[2798]: I0912 17:45:18.907585 2798 apiserver.go:52] "Watching apiserver" Sep 12 17:45:18.921260 kubelet[2798]: I0912 17:45:18.920913 2798 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:45:18.932302 kubelet[2798]: I0912 17:45:18.932244 2798 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 12 17:45:18.971824 kubelet[2798]: E0912 17:45:18.971761 2798 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-a-ca65cd0ccc\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:45:18.971824 kubelet[2798]: I0912 17:45:18.971795 2798 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:45:18.973955 kubelet[2798]: E0912 17:45:18.973912 2798 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-a-ca65cd0ccc\" is forbidden: no PriorityClass with name 
system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:45:18.973955 kubelet[2798]: I0912 17:45:18.973948 2798 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:45:18.980480 kubelet[2798]: E0912 17:45:18.980433 2798 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-a-ca65cd0ccc\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:45:19.026361 kubelet[2798]: I0912 17:45:19.023555 2798 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:45:19.026361 kubelet[2798]: I0912 17:45:19.023957 2798 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:45:19.027562 kubelet[2798]: E0912 17:45:19.027307 2798 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-a-ca65cd0ccc\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:45:19.029137 kubelet[2798]: E0912 17:45:19.029029 2798 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-a-ca65cd0ccc\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:45:21.570088 systemd[1]: Reloading requested from client PID 3080 ('systemctl') (unit session-9.scope)... Sep 12 17:45:21.570107 systemd[1]: Reloading... Sep 12 17:45:21.686304 zram_generator::config[3120]: No configuration found. 
Sep 12 17:45:21.798521 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:45:21.898435 systemd[1]: Reloading finished in 327 ms. Sep 12 17:45:21.931465 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:45:21.948228 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 17:45:21.948503 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:45:21.948573 systemd[1]: kubelet.service: Consumed 1.175s CPU time, 127.7M memory peak, 0B memory swap peak. Sep 12 17:45:21.953582 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:45:22.129279 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:45:22.140635 (kubelet)[3184]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 17:45:22.184674 kubelet[3184]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:45:22.184674 kubelet[3184]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 12 17:45:22.184674 kubelet[3184]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 12 17:45:22.184674 kubelet[3184]: I0912 17:45:22.184197 3184 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 17:45:22.192258 kubelet[3184]: I0912 17:45:22.191447 3184 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 12 17:45:22.192461 kubelet[3184]: I0912 17:45:22.192437 3184 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 17:45:22.192728 kubelet[3184]: I0912 17:45:22.192710 3184 server.go:956] "Client rotation is on, will bootstrap in background" Sep 12 17:45:22.194476 kubelet[3184]: I0912 17:45:22.194446 3184 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 12 17:45:22.198781 kubelet[3184]: I0912 17:45:22.198743 3184 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 17:45:22.202663 kubelet[3184]: E0912 17:45:22.202623 3184 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 12 17:45:22.203119 kubelet[3184]: I0912 17:45:22.202872 3184 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 12 17:45:22.209687 kubelet[3184]: I0912 17:45:22.209658 3184 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 17:45:22.210342 kubelet[3184]: I0912 17:45:22.210069 3184 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 17:45:22.210619 kubelet[3184]: I0912 17:45:22.210101 3184 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-a-ca65cd0ccc","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 17:45:22.210757 kubelet[3184]: I0912 17:45:22.210743 3184 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 
17:45:22.210808 kubelet[3184]: I0912 17:45:22.210800 3184 container_manager_linux.go:303] "Creating device plugin manager" Sep 12 17:45:22.210906 kubelet[3184]: I0912 17:45:22.210897 3184 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:45:22.211265 kubelet[3184]: I0912 17:45:22.211098 3184 kubelet.go:480] "Attempting to sync node with API server" Sep 12 17:45:22.211265 kubelet[3184]: I0912 17:45:22.211115 3184 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 17:45:22.211265 kubelet[3184]: I0912 17:45:22.211143 3184 kubelet.go:386] "Adding apiserver pod source" Sep 12 17:45:22.211265 kubelet[3184]: I0912 17:45:22.211157 3184 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 17:45:22.216832 kubelet[3184]: I0912 17:45:22.216793 3184 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 12 17:45:22.217492 kubelet[3184]: I0912 17:45:22.217423 3184 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 12 17:45:22.228264 kubelet[3184]: I0912 17:45:22.225852 3184 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 12 17:45:22.228264 kubelet[3184]: I0912 17:45:22.225902 3184 server.go:1289] "Started kubelet" Sep 12 17:45:22.230453 kubelet[3184]: I0912 17:45:22.230422 3184 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 17:45:22.249379 kubelet[3184]: I0912 17:45:22.249324 3184 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 17:45:22.250154 kubelet[3184]: I0912 17:45:22.250133 3184 server.go:317] "Adding debug handlers to kubelet server" Sep 12 17:45:22.254094 kubelet[3184]: I0912 17:45:22.254025 3184 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 17:45:22.254328 kubelet[3184]: I0912 17:45:22.254264 3184 
server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 17:45:22.254502 kubelet[3184]: I0912 17:45:22.254463 3184 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 17:45:22.255910 kubelet[3184]: I0912 17:45:22.255871 3184 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 12 17:45:22.257491 kubelet[3184]: I0912 17:45:22.257465 3184 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 12 17:45:22.257616 kubelet[3184]: I0912 17:45:22.257589 3184 reconciler.go:26] "Reconciler: start to sync state" Sep 12 17:45:22.259827 kubelet[3184]: I0912 17:45:22.259794 3184 factory.go:223] Registration of the systemd container factory successfully Sep 12 17:45:22.260162 kubelet[3184]: I0912 17:45:22.259954 3184 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 17:45:22.261629 kubelet[3184]: I0912 17:45:22.261600 3184 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 12 17:45:22.263387 kubelet[3184]: I0912 17:45:22.263186 3184 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 12 17:45:22.263387 kubelet[3184]: I0912 17:45:22.263210 3184 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 12 17:45:22.263781 kubelet[3184]: I0912 17:45:22.263498 3184 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 12 17:45:22.263781 kubelet[3184]: I0912 17:45:22.263518 3184 kubelet.go:2436] "Starting kubelet main sync loop" Sep 12 17:45:22.263781 kubelet[3184]: E0912 17:45:22.263564 3184 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 17:45:22.269380 kubelet[3184]: E0912 17:45:22.269181 3184 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 17:45:22.269594 kubelet[3184]: I0912 17:45:22.269566 3184 factory.go:223] Registration of the containerd container factory successfully Sep 12 17:45:22.324109 kubelet[3184]: I0912 17:45:22.324078 3184 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 12 17:45:22.324109 kubelet[3184]: I0912 17:45:22.324100 3184 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 12 17:45:22.324288 kubelet[3184]: I0912 17:45:22.324122 3184 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:45:22.324288 kubelet[3184]: I0912 17:45:22.324271 3184 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 12 17:45:22.324345 kubelet[3184]: I0912 17:45:22.324281 3184 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 12 17:45:22.324345 kubelet[3184]: I0912 17:45:22.324298 3184 policy_none.go:49] "None policy: Start" Sep 12 17:45:22.324345 kubelet[3184]: I0912 17:45:22.324309 3184 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 12 17:45:22.324345 kubelet[3184]: I0912 17:45:22.324319 3184 state_mem.go:35] "Initializing new in-memory state store" Sep 12 17:45:22.324454 kubelet[3184]: I0912 17:45:22.324436 3184 state_mem.go:75] "Updated machine memory state" Sep 12 17:45:22.329371 kubelet[3184]: E0912 17:45:22.328657 3184 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 12 17:45:22.329371 
kubelet[3184]: I0912 17:45:22.328856 3184 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 17:45:22.329371 kubelet[3184]: I0912 17:45:22.328868 3184 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 17:45:22.329371 kubelet[3184]: I0912 17:45:22.329148 3184 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 17:45:22.332065 kubelet[3184]: E0912 17:45:22.332039 3184 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 12 17:45:22.364772 kubelet[3184]: I0912 17:45:22.364465 3184 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:45:22.364772 kubelet[3184]: I0912 17:45:22.364508 3184 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:45:22.364939 kubelet[3184]: I0912 17:45:22.364783 3184 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:45:22.375586 kubelet[3184]: I0912 17:45:22.375560 3184 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 12 17:45:22.386053 kubelet[3184]: I0912 17:45:22.385648 3184 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 12 17:45:22.386283 kubelet[3184]: I0912 17:45:22.386269 3184 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 12 17:45:22.441321 kubelet[3184]: I0912 17:45:22.440357 3184 kubelet_node_status.go:75] "Attempting to register 
node" node="ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:45:22.460091 kubelet[3184]: I0912 17:45:22.459769 3184 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:45:22.460091 kubelet[3184]: I0912 17:45:22.459858 3184 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:45:22.558936 kubelet[3184]: I0912 17:45:22.558763 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5336d4e16fa1531aaad4cbe14e2d4b4b-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-a-ca65cd0ccc\" (UID: \"5336d4e16fa1531aaad4cbe14e2d4b4b\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:45:22.558936 kubelet[3184]: I0912 17:45:22.558804 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5336d4e16fa1531aaad4cbe14e2d4b4b-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-a-ca65cd0ccc\" (UID: \"5336d4e16fa1531aaad4cbe14e2d4b4b\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:45:22.558936 kubelet[3184]: I0912 17:45:22.558825 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5336d4e16fa1531aaad4cbe14e2d4b4b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-a-ca65cd0ccc\" (UID: \"5336d4e16fa1531aaad4cbe14e2d4b4b\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:45:22.558936 kubelet[3184]: I0912 17:45:22.558843 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/98f79bd3b5bcf99ad53cee5360026780-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-a-ca65cd0ccc\" (UID: 
\"98f79bd3b5bcf99ad53cee5360026780\") " pod="kube-system/kube-apiserver-ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:45:22.558936 kubelet[3184]: I0912 17:45:22.558858 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5336d4e16fa1531aaad4cbe14e2d4b4b-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-a-ca65cd0ccc\" (UID: \"5336d4e16fa1531aaad4cbe14e2d4b4b\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:45:22.559645 kubelet[3184]: I0912 17:45:22.558874 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5336d4e16fa1531aaad4cbe14e2d4b4b-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-a-ca65cd0ccc\" (UID: \"5336d4e16fa1531aaad4cbe14e2d4b4b\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:45:22.559645 kubelet[3184]: I0912 17:45:22.558889 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8c8c98de8ccca5c9cb5eb5e1754a9139-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-a-ca65cd0ccc\" (UID: \"8c8c98de8ccca5c9cb5eb5e1754a9139\") " pod="kube-system/kube-scheduler-ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:45:22.559645 kubelet[3184]: I0912 17:45:22.558903 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/98f79bd3b5bcf99ad53cee5360026780-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-a-ca65cd0ccc\" (UID: \"98f79bd3b5bcf99ad53cee5360026780\") " pod="kube-system/kube-apiserver-ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:45:22.559645 kubelet[3184]: I0912 17:45:22.558950 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/98f79bd3b5bcf99ad53cee5360026780-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-a-ca65cd0ccc\" (UID: \"98f79bd3b5bcf99ad53cee5360026780\") " pod="kube-system/kube-apiserver-ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:45:23.213773 kubelet[3184]: I0912 17:45:23.213737 3184 apiserver.go:52] "Watching apiserver" Sep 12 17:45:23.257999 kubelet[3184]: I0912 17:45:23.257954 3184 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 12 17:45:23.307074 kubelet[3184]: I0912 17:45:23.306710 3184 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:45:23.323866 kubelet[3184]: I0912 17:45:23.323576 3184 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 12 17:45:23.323866 kubelet[3184]: E0912 17:45:23.323635 3184 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-a-ca65cd0ccc\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:45:23.343418 kubelet[3184]: I0912 17:45:23.343348 3184 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.6-a-ca65cd0ccc" podStartSLOduration=1.343329215 podStartE2EDuration="1.343329215s" podCreationTimestamp="2025-09-12 17:45:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:45:23.342173294 +0000 UTC m=+1.197801841" watchObservedRunningTime="2025-09-12 17:45:23.343329215 +0000 UTC m=+1.198957682" Sep 12 17:45:23.378318 kubelet[3184]: I0912 17:45:23.378257 3184 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.6-a-ca65cd0ccc" podStartSLOduration=1.378227391 podStartE2EDuration="1.378227391s" 
podCreationTimestamp="2025-09-12 17:45:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:45:23.356049221 +0000 UTC m=+1.211677688" watchObservedRunningTime="2025-09-12 17:45:23.378227391 +0000 UTC m=+1.233855858" Sep 12 17:45:23.378502 kubelet[3184]: I0912 17:45:23.378377 3184 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.6-a-ca65cd0ccc" podStartSLOduration=1.378372591 podStartE2EDuration="1.378372591s" podCreationTimestamp="2025-09-12 17:45:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:45:23.377727671 +0000 UTC m=+1.233356138" watchObservedRunningTime="2025-09-12 17:45:23.378372591 +0000 UTC m=+1.234001098" Sep 12 17:45:27.214170 kubelet[3184]: I0912 17:45:27.214015 3184 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 12 17:45:27.214688 kubelet[3184]: I0912 17:45:27.214509 3184 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 12 17:45:27.214724 containerd[1714]: time="2025-09-12T17:45:27.214328573Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 12 17:45:28.109016 systemd[1]: Created slice kubepods-besteffort-poda928c0bb_5d74_49ca_9bc5_18476c15fad0.slice - libcontainer container kubepods-besteffort-poda928c0bb_5d74_49ca_9bc5_18476c15fad0.slice. 
Sep 12 17:45:28.193483 kubelet[3184]: I0912 17:45:28.193431 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a928c0bb-5d74-49ca-9bc5-18476c15fad0-kube-proxy\") pod \"kube-proxy-jz8kj\" (UID: \"a928c0bb-5d74-49ca-9bc5-18476c15fad0\") " pod="kube-system/kube-proxy-jz8kj" Sep 12 17:45:28.193483 kubelet[3184]: I0912 17:45:28.193479 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a928c0bb-5d74-49ca-9bc5-18476c15fad0-xtables-lock\") pod \"kube-proxy-jz8kj\" (UID: \"a928c0bb-5d74-49ca-9bc5-18476c15fad0\") " pod="kube-system/kube-proxy-jz8kj" Sep 12 17:45:28.193651 kubelet[3184]: I0912 17:45:28.193497 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a928c0bb-5d74-49ca-9bc5-18476c15fad0-lib-modules\") pod \"kube-proxy-jz8kj\" (UID: \"a928c0bb-5d74-49ca-9bc5-18476c15fad0\") " pod="kube-system/kube-proxy-jz8kj" Sep 12 17:45:28.193651 kubelet[3184]: I0912 17:45:28.193511 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtzhz\" (UniqueName: \"kubernetes.io/projected/a928c0bb-5d74-49ca-9bc5-18476c15fad0-kube-api-access-gtzhz\") pod \"kube-proxy-jz8kj\" (UID: \"a928c0bb-5d74-49ca-9bc5-18476c15fad0\") " pod="kube-system/kube-proxy-jz8kj" Sep 12 17:45:28.410901 systemd[1]: Created slice kubepods-besteffort-pod8b40f3ed_e4a2_44d5_9ca0_2b00fb65a081.slice - libcontainer container kubepods-besteffort-pod8b40f3ed_e4a2_44d5_9ca0_2b00fb65a081.slice. 
Sep 12 17:45:28.419433 containerd[1714]: time="2025-09-12T17:45:28.419158235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jz8kj,Uid:a928c0bb-5d74-49ca-9bc5-18476c15fad0,Namespace:kube-system,Attempt:0,}" Sep 12 17:45:28.468538 containerd[1714]: time="2025-09-12T17:45:28.468007795Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:45:28.468538 containerd[1714]: time="2025-09-12T17:45:28.468070075Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:45:28.468538 containerd[1714]: time="2025-09-12T17:45:28.468081995Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:45:28.468728 containerd[1714]: time="2025-09-12T17:45:28.468176315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:45:28.487492 systemd[1]: Started cri-containerd-e9f5601400da2cb2d525d10083aebe4f3f4512eb97c932ecf569192e59f5888e.scope - libcontainer container e9f5601400da2cb2d525d10083aebe4f3f4512eb97c932ecf569192e59f5888e. 
Sep 12 17:45:28.495461 kubelet[3184]: I0912 17:45:28.495361 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sltj8\" (UniqueName: \"kubernetes.io/projected/8b40f3ed-e4a2-44d5-9ca0-2b00fb65a081-kube-api-access-sltj8\") pod \"tigera-operator-755d956888-5rxv4\" (UID: \"8b40f3ed-e4a2-44d5-9ca0-2b00fb65a081\") " pod="tigera-operator/tigera-operator-755d956888-5rxv4" Sep 12 17:45:28.495461 kubelet[3184]: I0912 17:45:28.495406 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8b40f3ed-e4a2-44d5-9ca0-2b00fb65a081-var-lib-calico\") pod \"tigera-operator-755d956888-5rxv4\" (UID: \"8b40f3ed-e4a2-44d5-9ca0-2b00fb65a081\") " pod="tigera-operator/tigera-operator-755d956888-5rxv4" Sep 12 17:45:28.507514 containerd[1714]: time="2025-09-12T17:45:28.507469547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jz8kj,Uid:a928c0bb-5d74-49ca-9bc5-18476c15fad0,Namespace:kube-system,Attempt:0,} returns sandbox id \"e9f5601400da2cb2d525d10083aebe4f3f4512eb97c932ecf569192e59f5888e\"" Sep 12 17:45:28.519264 containerd[1714]: time="2025-09-12T17:45:28.518769516Z" level=info msg="CreateContainer within sandbox \"e9f5601400da2cb2d525d10083aebe4f3f4512eb97c932ecf569192e59f5888e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 12 17:45:28.569557 containerd[1714]: time="2025-09-12T17:45:28.569505278Z" level=info msg="CreateContainer within sandbox \"e9f5601400da2cb2d525d10083aebe4f3f4512eb97c932ecf569192e59f5888e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7b09bbb025af5862c81bc3e41d1d4df36b0041495813945ded0b0ffbc9d9fdcd\"" Sep 12 17:45:28.570776 containerd[1714]: time="2025-09-12T17:45:28.570222718Z" level=info msg="StartContainer for \"7b09bbb025af5862c81bc3e41d1d4df36b0041495813945ded0b0ffbc9d9fdcd\"" Sep 12 17:45:28.597577 systemd[1]: Started 
cri-containerd-7b09bbb025af5862c81bc3e41d1d4df36b0041495813945ded0b0ffbc9d9fdcd.scope - libcontainer container 7b09bbb025af5862c81bc3e41d1d4df36b0041495813945ded0b0ffbc9d9fdcd. Sep 12 17:45:28.635910 containerd[1714]: time="2025-09-12T17:45:28.635860772Z" level=info msg="StartContainer for \"7b09bbb025af5862c81bc3e41d1d4df36b0041495813945ded0b0ffbc9d9fdcd\" returns successfully" Sep 12 17:45:28.718015 containerd[1714]: time="2025-09-12T17:45:28.717910479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-5rxv4,Uid:8b40f3ed-e4a2-44d5-9ca0-2b00fb65a081,Namespace:tigera-operator,Attempt:0,}" Sep 12 17:45:28.758046 containerd[1714]: time="2025-09-12T17:45:28.757869231Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:45:28.758046 containerd[1714]: time="2025-09-12T17:45:28.757978511Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:45:28.758046 containerd[1714]: time="2025-09-12T17:45:28.757999831Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:45:28.758441 containerd[1714]: time="2025-09-12T17:45:28.758115432Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:45:28.772621 systemd[1]: Started cri-containerd-39418a9d85a1447dd4ee4ae7d240c93c9a0f574cb1cd8f9c5b059d37c36bf554.scope - libcontainer container 39418a9d85a1447dd4ee4ae7d240c93c9a0f574cb1cd8f9c5b059d37c36bf554. 
Sep 12 17:45:28.814383 containerd[1714]: time="2025-09-12T17:45:28.814257317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-5rxv4,Uid:8b40f3ed-e4a2-44d5-9ca0-2b00fb65a081,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"39418a9d85a1447dd4ee4ae7d240c93c9a0f574cb1cd8f9c5b059d37c36bf554\"" Sep 12 17:45:28.816451 containerd[1714]: time="2025-09-12T17:45:28.816277719Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 12 17:45:29.308078 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount568897956.mount: Deactivated successfully. Sep 12 17:45:29.331066 kubelet[3184]: I0912 17:45:29.331008 3184 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jz8kj" podStartSLOduration=1.330981617 podStartE2EDuration="1.330981617s" podCreationTimestamp="2025-09-12 17:45:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:45:29.330093936 +0000 UTC m=+7.185722403" watchObservedRunningTime="2025-09-12 17:45:29.330981617 +0000 UTC m=+7.186610084" Sep 12 17:45:30.631543 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3705100214.mount: Deactivated successfully. 
Sep 12 17:45:31.362043 containerd[1714]: time="2025-09-12T17:45:31.361276866Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:45:31.365066 containerd[1714]: time="2025-09-12T17:45:31.365012069Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=22152365" Sep 12 17:45:31.368529 containerd[1714]: time="2025-09-12T17:45:31.368484912Z" level=info msg="ImageCreate event name:\"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:45:31.375103 containerd[1714]: time="2025-09-12T17:45:31.374170796Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:45:31.375103 containerd[1714]: time="2025-09-12T17:45:31.374957717Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"22148360\" in 2.558624598s" Sep 12 17:45:31.375103 containerd[1714]: time="2025-09-12T17:45:31.374986557Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\"" Sep 12 17:45:31.383897 containerd[1714]: time="2025-09-12T17:45:31.383848444Z" level=info msg="CreateContainer within sandbox \"39418a9d85a1447dd4ee4ae7d240c93c9a0f574cb1cd8f9c5b059d37c36bf554\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 12 17:45:31.408452 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1929349326.mount: Deactivated successfully. 
Sep 12 17:45:31.421145 containerd[1714]: time="2025-09-12T17:45:31.421100555Z" level=info msg="CreateContainer within sandbox \"39418a9d85a1447dd4ee4ae7d240c93c9a0f574cb1cd8f9c5b059d37c36bf554\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"5eadcdca454e1c8d22d165569c08eefe43befc5fff5de75d79e429fd8641e4bf\"" Sep 12 17:45:31.423227 containerd[1714]: time="2025-09-12T17:45:31.423190156Z" level=info msg="StartContainer for \"5eadcdca454e1c8d22d165569c08eefe43befc5fff5de75d79e429fd8641e4bf\"" Sep 12 17:45:31.447591 systemd[1]: Started cri-containerd-5eadcdca454e1c8d22d165569c08eefe43befc5fff5de75d79e429fd8641e4bf.scope - libcontainer container 5eadcdca454e1c8d22d165569c08eefe43befc5fff5de75d79e429fd8641e4bf. Sep 12 17:45:31.476214 containerd[1714]: time="2025-09-12T17:45:31.475655039Z" level=info msg="StartContainer for \"5eadcdca454e1c8d22d165569c08eefe43befc5fff5de75d79e429fd8641e4bf\" returns successfully" Sep 12 17:45:35.977342 kubelet[3184]: I0912 17:45:35.977025 3184 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-5rxv4" podStartSLOduration=5.416869895 podStartE2EDuration="7.977009095s" podCreationTimestamp="2025-09-12 17:45:28 +0000 UTC" firstStartedPulling="2025-09-12 17:45:28.815709438 +0000 UTC m=+6.671337905" lastFinishedPulling="2025-09-12 17:45:31.375848678 +0000 UTC m=+9.231477105" observedRunningTime="2025-09-12 17:45:32.334293616 +0000 UTC m=+10.189922083" watchObservedRunningTime="2025-09-12 17:45:35.977009095 +0000 UTC m=+13.832637562" Sep 12 17:45:37.518454 sudo[2216]: pam_unix(sudo:session): session closed for user root Sep 12 17:45:37.603327 sshd[2213]: pam_unix(sshd:session): session closed for user core Sep 12 17:45:37.607093 systemd[1]: sshd@6-10.200.20.46:22-10.200.16.10:32896.service: Deactivated successfully. Sep 12 17:45:37.612509 systemd[1]: session-9.scope: Deactivated successfully. 
Sep 12 17:45:37.612681 systemd[1]: session-9.scope: Consumed 7.995s CPU time, 152.0M memory peak, 0B memory swap peak. Sep 12 17:45:37.614471 systemd-logind[1692]: Session 9 logged out. Waiting for processes to exit. Sep 12 17:45:37.617475 systemd-logind[1692]: Removed session 9. Sep 12 17:45:45.775279 systemd[1]: Created slice kubepods-besteffort-pod1435b4a2_4716_45ea_943e_34e41bf7e443.slice - libcontainer container kubepods-besteffort-pod1435b4a2_4716_45ea_943e_34e41bf7e443.slice. Sep 12 17:45:45.808669 kubelet[3184]: I0912 17:45:45.807662 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1435b4a2-4716-45ea-943e-34e41bf7e443-tigera-ca-bundle\") pod \"calico-typha-5b9f87d97f-hxpkv\" (UID: \"1435b4a2-4716-45ea-943e-34e41bf7e443\") " pod="calico-system/calico-typha-5b9f87d97f-hxpkv" Sep 12 17:45:45.808669 kubelet[3184]: I0912 17:45:45.807702 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmlnz\" (UniqueName: \"kubernetes.io/projected/1435b4a2-4716-45ea-943e-34e41bf7e443-kube-api-access-kmlnz\") pod \"calico-typha-5b9f87d97f-hxpkv\" (UID: \"1435b4a2-4716-45ea-943e-34e41bf7e443\") " pod="calico-system/calico-typha-5b9f87d97f-hxpkv" Sep 12 17:45:45.808669 kubelet[3184]: I0912 17:45:45.807720 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/1435b4a2-4716-45ea-943e-34e41bf7e443-typha-certs\") pod \"calico-typha-5b9f87d97f-hxpkv\" (UID: \"1435b4a2-4716-45ea-943e-34e41bf7e443\") " pod="calico-system/calico-typha-5b9f87d97f-hxpkv" Sep 12 17:45:45.961988 systemd[1]: Created slice kubepods-besteffort-pod599b5021_0227_45c9_9605_9d65789980f2.slice - libcontainer container kubepods-besteffort-pod599b5021_0227_45c9_9605_9d65789980f2.slice. 
Sep 12 17:45:46.009260 kubelet[3184]: I0912 17:45:46.009192 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/599b5021-0227-45c9-9605-9d65789980f2-tigera-ca-bundle\") pod \"calico-node-sw4tx\" (UID: \"599b5021-0227-45c9-9605-9d65789980f2\") " pod="calico-system/calico-node-sw4tx" Sep 12 17:45:46.009395 kubelet[3184]: I0912 17:45:46.009248 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/599b5021-0227-45c9-9605-9d65789980f2-cni-log-dir\") pod \"calico-node-sw4tx\" (UID: \"599b5021-0227-45c9-9605-9d65789980f2\") " pod="calico-system/calico-node-sw4tx" Sep 12 17:45:46.009395 kubelet[3184]: I0912 17:45:46.009319 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/599b5021-0227-45c9-9605-9d65789980f2-policysync\") pod \"calico-node-sw4tx\" (UID: \"599b5021-0227-45c9-9605-9d65789980f2\") " pod="calico-system/calico-node-sw4tx" Sep 12 17:45:46.009395 kubelet[3184]: I0912 17:45:46.009336 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/599b5021-0227-45c9-9605-9d65789980f2-cni-net-dir\") pod \"calico-node-sw4tx\" (UID: \"599b5021-0227-45c9-9605-9d65789980f2\") " pod="calico-system/calico-node-sw4tx" Sep 12 17:45:46.009395 kubelet[3184]: I0912 17:45:46.009388 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/599b5021-0227-45c9-9605-9d65789980f2-flexvol-driver-host\") pod \"calico-node-sw4tx\" (UID: \"599b5021-0227-45c9-9605-9d65789980f2\") " pod="calico-system/calico-node-sw4tx" Sep 12 17:45:46.009494 kubelet[3184]: I0912 17:45:46.009408 3184 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/599b5021-0227-45c9-9605-9d65789980f2-xtables-lock\") pod \"calico-node-sw4tx\" (UID: \"599b5021-0227-45c9-9605-9d65789980f2\") " pod="calico-system/calico-node-sw4tx" Sep 12 17:45:46.009494 kubelet[3184]: I0912 17:45:46.009435 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/599b5021-0227-45c9-9605-9d65789980f2-node-certs\") pod \"calico-node-sw4tx\" (UID: \"599b5021-0227-45c9-9605-9d65789980f2\") " pod="calico-system/calico-node-sw4tx" Sep 12 17:45:46.009494 kubelet[3184]: I0912 17:45:46.009450 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/599b5021-0227-45c9-9605-9d65789980f2-var-lib-calico\") pod \"calico-node-sw4tx\" (UID: \"599b5021-0227-45c9-9605-9d65789980f2\") " pod="calico-system/calico-node-sw4tx" Sep 12 17:45:46.009494 kubelet[3184]: I0912 17:45:46.009464 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/599b5021-0227-45c9-9605-9d65789980f2-var-run-calico\") pod \"calico-node-sw4tx\" (UID: \"599b5021-0227-45c9-9605-9d65789980f2\") " pod="calico-system/calico-node-sw4tx" Sep 12 17:45:46.009494 kubelet[3184]: I0912 17:45:46.009480 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/599b5021-0227-45c9-9605-9d65789980f2-cni-bin-dir\") pod \"calico-node-sw4tx\" (UID: \"599b5021-0227-45c9-9605-9d65789980f2\") " pod="calico-system/calico-node-sw4tx" Sep 12 17:45:46.009595 kubelet[3184]: I0912 17:45:46.009495 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/599b5021-0227-45c9-9605-9d65789980f2-lib-modules\") pod \"calico-node-sw4tx\" (UID: \"599b5021-0227-45c9-9605-9d65789980f2\") " pod="calico-system/calico-node-sw4tx" Sep 12 17:45:46.009595 kubelet[3184]: I0912 17:45:46.009521 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lls9q\" (UniqueName: \"kubernetes.io/projected/599b5021-0227-45c9-9605-9d65789980f2-kube-api-access-lls9q\") pod \"calico-node-sw4tx\" (UID: \"599b5021-0227-45c9-9605-9d65789980f2\") " pod="calico-system/calico-node-sw4tx" Sep 12 17:45:46.081536 containerd[1714]: time="2025-09-12T17:45:46.080490375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5b9f87d97f-hxpkv,Uid:1435b4a2-4716-45ea-943e-34e41bf7e443,Namespace:calico-system,Attempt:0,}" Sep 12 17:45:46.112812 kubelet[3184]: E0912 17:45:46.112660 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:46.112812 kubelet[3184]: W0912 17:45:46.112682 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:46.112812 kubelet[3184]: E0912 17:45:46.112706 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:45:46.114421 kubelet[3184]: E0912 17:45:46.114310 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:46.114421 kubelet[3184]: W0912 17:45:46.114326 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:46.114421 kubelet[3184]: E0912 17:45:46.114341 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:45:46.117406 kubelet[3184]: E0912 17:45:46.117361 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:46.117406 kubelet[3184]: W0912 17:45:46.117377 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:46.117406 kubelet[3184]: E0912 17:45:46.117391 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:45:46.121306 kubelet[3184]: E0912 17:45:46.118284 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:46.121306 kubelet[3184]: W0912 17:45:46.118300 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:46.121306 kubelet[3184]: E0912 17:45:46.118314 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:45:46.121925 kubelet[3184]: E0912 17:45:46.121763 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:46.121925 kubelet[3184]: W0912 17:45:46.121780 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:46.121925 kubelet[3184]: E0912 17:45:46.121794 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:45:46.123265 kubelet[3184]: E0912 17:45:46.122096 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:46.123265 kubelet[3184]: W0912 17:45:46.122108 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:46.123265 kubelet[3184]: E0912 17:45:46.122119 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:45:46.123764 kubelet[3184]: E0912 17:45:46.123668 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:46.123764 kubelet[3184]: W0912 17:45:46.123682 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:46.123764 kubelet[3184]: E0912 17:45:46.123694 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:45:46.124188 kubelet[3184]: E0912 17:45:46.124063 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:46.124188 kubelet[3184]: W0912 17:45:46.124077 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:46.124188 kubelet[3184]: E0912 17:45:46.124089 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:45:46.124458 kubelet[3184]: E0912 17:45:46.124387 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:46.124458 kubelet[3184]: W0912 17:45:46.124399 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:46.124458 kubelet[3184]: E0912 17:45:46.124412 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:45:46.125917 kubelet[3184]: E0912 17:45:46.125411 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:46.125917 kubelet[3184]: W0912 17:45:46.125426 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:46.125917 kubelet[3184]: E0912 17:45:46.125440 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:45:46.125917 kubelet[3184]: E0912 17:45:46.125861 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:46.125917 kubelet[3184]: W0912 17:45:46.125873 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:46.125917 kubelet[3184]: E0912 17:45:46.125884 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:45:46.126094 kubelet[3184]: E0912 17:45:46.126041 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:46.126094 kubelet[3184]: W0912 17:45:46.126049 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:46.126094 kubelet[3184]: E0912 17:45:46.126058 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:45:46.126837 kubelet[3184]: E0912 17:45:46.126202 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:46.126837 kubelet[3184]: W0912 17:45:46.126209 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:46.126837 kubelet[3184]: E0912 17:45:46.126216 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:45:46.126837 kubelet[3184]: E0912 17:45:46.126489 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:46.126837 kubelet[3184]: W0912 17:45:46.126497 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:46.126837 kubelet[3184]: E0912 17:45:46.126506 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:45:46.131446 kubelet[3184]: E0912 17:45:46.130645 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:46.131446 kubelet[3184]: W0912 17:45:46.130662 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:46.131446 kubelet[3184]: E0912 17:45:46.130674 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:45:46.131446 kubelet[3184]: E0912 17:45:46.131395 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:46.131446 kubelet[3184]: W0912 17:45:46.131408 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:46.131446 kubelet[3184]: E0912 17:45:46.131419 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:45:46.133394 kubelet[3184]: E0912 17:45:46.133340 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:46.133394 kubelet[3184]: W0912 17:45:46.133357 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:46.133394 kubelet[3184]: E0912 17:45:46.133372 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:45:46.147785 kubelet[3184]: E0912 17:45:46.147472 3184 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jjztm" podUID="64c67752-89ee-4f26-b63e-b37e41be4790" Sep 12 17:45:46.149805 containerd[1714]: time="2025-09-12T17:45:46.149643192Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:45:46.150797 containerd[1714]: time="2025-09-12T17:45:46.150546273Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:45:46.151491 containerd[1714]: time="2025-09-12T17:45:46.151311553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:45:46.152195 containerd[1714]: time="2025-09-12T17:45:46.151506794Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:45:46.168254 kubelet[3184]: E0912 17:45:46.167315 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:46.168254 kubelet[3184]: W0912 17:45:46.167345 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:46.168254 kubelet[3184]: E0912 17:45:46.167364 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:45:46.183384 systemd[1]: Started cri-containerd-963f27b87e55be7bcbc16e314d477ccc0e0f26b205c951e329e0d292cabb1703.scope - libcontainer container 963f27b87e55be7bcbc16e314d477ccc0e0f26b205c951e329e0d292cabb1703. 
Sep 12 17:45:46.200680 kubelet[3184]: E0912 17:45:46.200649 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:46.203362 kubelet[3184]: W0912 17:45:46.200672 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:46.203362 kubelet[3184]: E0912 17:45:46.200731 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:45:46.203362 kubelet[3184]: E0912 17:45:46.200952 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:46.203362 kubelet[3184]: W0912 17:45:46.200961 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:46.203362 kubelet[3184]: E0912 17:45:46.201013 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:45:46.203362 kubelet[3184]: E0912 17:45:46.201192 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:46.203362 kubelet[3184]: W0912 17:45:46.201199 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:46.203362 kubelet[3184]: E0912 17:45:46.201208 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:45:46.203362 kubelet[3184]: E0912 17:45:46.201440 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:46.203362 kubelet[3184]: W0912 17:45:46.201450 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:46.203583 kubelet[3184]: E0912 17:45:46.201462 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:45:46.203583 kubelet[3184]: E0912 17:45:46.202495 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:46.203583 kubelet[3184]: W0912 17:45:46.202504 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:46.203583 kubelet[3184]: E0912 17:45:46.202515 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:45:46.203583 kubelet[3184]: E0912 17:45:46.203334 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:46.203583 kubelet[3184]: W0912 17:45:46.203346 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:46.203583 kubelet[3184]: E0912 17:45:46.203369 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:45:46.203583 kubelet[3184]: E0912 17:45:46.203554 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:46.203583 kubelet[3184]: W0912 17:45:46.203562 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:46.203583 kubelet[3184]: E0912 17:45:46.203571 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:45:46.204478 kubelet[3184]: E0912 17:45:46.204451 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:46.204478 kubelet[3184]: W0912 17:45:46.204472 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:46.204583 kubelet[3184]: E0912 17:45:46.204485 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:45:46.204905 kubelet[3184]: E0912 17:45:46.204866 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:46.204905 kubelet[3184]: W0912 17:45:46.204882 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:46.205272 kubelet[3184]: E0912 17:45:46.204894 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:45:46.206175 kubelet[3184]: E0912 17:45:46.206150 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:46.206175 kubelet[3184]: W0912 17:45:46.206169 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:46.206289 kubelet[3184]: E0912 17:45:46.206181 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:45:46.206598 kubelet[3184]: E0912 17:45:46.206577 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:46.206598 kubelet[3184]: W0912 17:45:46.206594 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:46.206663 kubelet[3184]: E0912 17:45:46.206605 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:45:46.208259 kubelet[3184]: E0912 17:45:46.206943 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:46.208259 kubelet[3184]: W0912 17:45:46.206960 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:46.208259 kubelet[3184]: E0912 17:45:46.206972 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:45:46.208259 kubelet[3184]: E0912 17:45:46.207118 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:46.208259 kubelet[3184]: W0912 17:45:46.207125 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:46.208259 kubelet[3184]: E0912 17:45:46.207132 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:45:46.208259 kubelet[3184]: E0912 17:45:46.207303 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:46.208259 kubelet[3184]: W0912 17:45:46.207311 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:46.208259 kubelet[3184]: E0912 17:45:46.207319 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:45:46.208259 kubelet[3184]: E0912 17:45:46.207445 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:46.208541 kubelet[3184]: W0912 17:45:46.207451 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:46.208541 kubelet[3184]: E0912 17:45:46.207460 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:45:46.208541 kubelet[3184]: E0912 17:45:46.207579 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:46.208541 kubelet[3184]: W0912 17:45:46.207585 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:46.208541 kubelet[3184]: E0912 17:45:46.207592 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:45:46.208541 kubelet[3184]: E0912 17:45:46.207724 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:46.208541 kubelet[3184]: W0912 17:45:46.207731 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:46.208541 kubelet[3184]: E0912 17:45:46.207739 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:45:46.208541 kubelet[3184]: E0912 17:45:46.207871 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:46.208541 kubelet[3184]: W0912 17:45:46.207878 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:46.208734 kubelet[3184]: E0912 17:45:46.207887 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:45:46.208734 kubelet[3184]: E0912 17:45:46.208008 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:46.208734 kubelet[3184]: W0912 17:45:46.208016 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:46.208734 kubelet[3184]: E0912 17:45:46.208023 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:45:46.208734 kubelet[3184]: E0912 17:45:46.208308 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:46.208734 kubelet[3184]: W0912 17:45:46.208318 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:46.208734 kubelet[3184]: E0912 17:45:46.208328 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:45:46.212711 kubelet[3184]: E0912 17:45:46.212682 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:46.212804 kubelet[3184]: W0912 17:45:46.212724 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:46.212804 kubelet[3184]: E0912 17:45:46.212736 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:45:46.212804 kubelet[3184]: I0912 17:45:46.212764 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/64c67752-89ee-4f26-b63e-b37e41be4790-registration-dir\") pod \"csi-node-driver-jjztm\" (UID: \"64c67752-89ee-4f26-b63e-b37e41be4790\") " pod="calico-system/csi-node-driver-jjztm" Sep 12 17:45:46.213073 kubelet[3184]: E0912 17:45:46.213052 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:46.213073 kubelet[3184]: W0912 17:45:46.213068 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:46.213201 kubelet[3184]: E0912 17:45:46.213080 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:45:46.213201 kubelet[3184]: I0912 17:45:46.213132 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/64c67752-89ee-4f26-b63e-b37e41be4790-socket-dir\") pod \"csi-node-driver-jjztm\" (UID: \"64c67752-89ee-4f26-b63e-b37e41be4790\") " pod="calico-system/csi-node-driver-jjztm" Sep 12 17:45:46.214291 kubelet[3184]: E0912 17:45:46.213409 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:46.214291 kubelet[3184]: W0912 17:45:46.213437 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:46.214291 kubelet[3184]: E0912 17:45:46.213449 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:45:46.214291 kubelet[3184]: I0912 17:45:46.213472 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/64c67752-89ee-4f26-b63e-b37e41be4790-varrun\") pod \"csi-node-driver-jjztm\" (UID: \"64c67752-89ee-4f26-b63e-b37e41be4790\") " pod="calico-system/csi-node-driver-jjztm" Sep 12 17:45:46.214291 kubelet[3184]: E0912 17:45:46.213760 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:46.214291 kubelet[3184]: W0912 17:45:46.213771 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:46.214291 kubelet[3184]: E0912 17:45:46.213798 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:45:46.214291 kubelet[3184]: I0912 17:45:46.213820 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/64c67752-89ee-4f26-b63e-b37e41be4790-kubelet-dir\") pod \"csi-node-driver-jjztm\" (UID: \"64c67752-89ee-4f26-b63e-b37e41be4790\") " pod="calico-system/csi-node-driver-jjztm" Sep 12 17:45:46.214291 kubelet[3184]: E0912 17:45:46.214086 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:46.214536 kubelet[3184]: W0912 17:45:46.214129 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:46.214536 kubelet[3184]: E0912 17:45:46.214140 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:45:46.214536 kubelet[3184]: I0912 17:45:46.214155 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7p5m\" (UniqueName: \"kubernetes.io/projected/64c67752-89ee-4f26-b63e-b37e41be4790-kube-api-access-c7p5m\") pod \"csi-node-driver-jjztm\" (UID: \"64c67752-89ee-4f26-b63e-b37e41be4790\") " pod="calico-system/csi-node-driver-jjztm" Sep 12 17:45:46.214536 kubelet[3184]: E0912 17:45:46.214455 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:46.214536 kubelet[3184]: W0912 17:45:46.214465 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:46.214536 kubelet[3184]: E0912 17:45:46.214476 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:45:46.217674 kubelet[3184]: E0912 17:45:46.214663 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:46.217674 kubelet[3184]: W0912 17:45:46.214677 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:46.217674 kubelet[3184]: E0912 17:45:46.214686 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:45:46.217674 kubelet[3184]: E0912 17:45:46.215108 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:46.217674 kubelet[3184]: W0912 17:45:46.215121 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:46.217674 kubelet[3184]: E0912 17:45:46.215130 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:45:46.217674 kubelet[3184]: E0912 17:45:46.215549 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:46.217674 kubelet[3184]: W0912 17:45:46.215577 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:46.217674 kubelet[3184]: E0912 17:45:46.215592 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:45:46.217674 kubelet[3184]: E0912 17:45:46.215772 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:46.218005 kubelet[3184]: W0912 17:45:46.215781 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:46.218005 kubelet[3184]: E0912 17:45:46.215790 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:45:46.218005 kubelet[3184]: E0912 17:45:46.215998 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:46.218005 kubelet[3184]: W0912 17:45:46.216007 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:46.218005 kubelet[3184]: E0912 17:45:46.216015 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:45:46.218005 kubelet[3184]: E0912 17:45:46.216323 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:46.218005 kubelet[3184]: W0912 17:45:46.216333 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:46.218005 kubelet[3184]: E0912 17:45:46.216342 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:45:46.218005 kubelet[3184]: E0912 17:45:46.216728 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:46.218005 kubelet[3184]: W0912 17:45:46.216739 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:46.218205 kubelet[3184]: E0912 17:45:46.216749 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:45:46.218205 kubelet[3184]: E0912 17:45:46.217419 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:46.218205 kubelet[3184]: W0912 17:45:46.217431 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:46.218205 kubelet[3184]: E0912 17:45:46.217441 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:45:46.218205 kubelet[3184]: E0912 17:45:46.217634 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:46.218205 kubelet[3184]: W0912 17:45:46.217642 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:46.218205 kubelet[3184]: E0912 17:45:46.217668 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:45:46.253911 containerd[1714]: time="2025-09-12T17:45:46.253820639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5b9f87d97f-hxpkv,Uid:1435b4a2-4716-45ea-943e-34e41bf7e443,Namespace:calico-system,Attempt:0,} returns sandbox id \"963f27b87e55be7bcbc16e314d477ccc0e0f26b205c951e329e0d292cabb1703\"" Sep 12 17:45:46.258083 containerd[1714]: time="2025-09-12T17:45:46.256515041Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 12 17:45:46.265534 containerd[1714]: time="2025-09-12T17:45:46.265492409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-sw4tx,Uid:599b5021-0227-45c9-9605-9d65789980f2,Namespace:calico-system,Attempt:0,}" Sep 12 17:45:46.316047 kubelet[3184]: E0912 17:45:46.315367 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:46.316047 kubelet[3184]: W0912 17:45:46.315558 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:46.316047 kubelet[3184]: E0912 17:45:46.315685 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:45:46.316810 kubelet[3184]: E0912 17:45:46.316745 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:46.318169 kubelet[3184]: W0912 17:45:46.316762 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:46.318169 kubelet[3184]: E0912 17:45:46.318160 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:45:46.318893 kubelet[3184]: E0912 17:45:46.318861 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:46.318893 kubelet[3184]: W0912 17:45:46.318880 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:46.319159 kubelet[3184]: E0912 17:45:46.319129 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:45:46.319750 kubelet[3184]: E0912 17:45:46.319727 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:46.319750 kubelet[3184]: W0912 17:45:46.319743 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:46.319866 kubelet[3184]: E0912 17:45:46.319764 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:45:46.320460 kubelet[3184]: E0912 17:45:46.320434 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:46.320460 kubelet[3184]: W0912 17:45:46.320452 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:46.320561 kubelet[3184]: E0912 17:45:46.320467 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:45:46.321073 kubelet[3184]: E0912 17:45:46.321044 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:46.321925 kubelet[3184]: W0912 17:45:46.321886 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:46.322016 kubelet[3184]: E0912 17:45:46.321919 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:45:46.322700 kubelet[3184]: E0912 17:45:46.322549 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:46.322821 kubelet[3184]: W0912 17:45:46.322754 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:46.322960 kubelet[3184]: E0912 17:45:46.322935 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:45:46.323407 containerd[1714]: time="2025-09-12T17:45:46.322182496Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:45:46.323407 containerd[1714]: time="2025-09-12T17:45:46.322246296Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:45:46.323407 containerd[1714]: time="2025-09-12T17:45:46.322258176Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:45:46.323856 containerd[1714]: time="2025-09-12T17:45:46.323613817Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:45:46.324024 kubelet[3184]: E0912 17:45:46.323782 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:46.324024 kubelet[3184]: W0912 17:45:46.323794 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:46.324024 kubelet[3184]: E0912 17:45:46.323805 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:45:46.325354 kubelet[3184]: E0912 17:45:46.324413 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:46.325354 kubelet[3184]: W0912 17:45:46.324429 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:46.325354 kubelet[3184]: E0912 17:45:46.324439 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:45:46.325354 kubelet[3184]: E0912 17:45:46.324632 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:46.325354 kubelet[3184]: W0912 17:45:46.324640 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:46.325354 kubelet[3184]: E0912 17:45:46.324649 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:45:46.325354 kubelet[3184]: E0912 17:45:46.324768 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:46.325354 kubelet[3184]: W0912 17:45:46.324773 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:46.325354 kubelet[3184]: E0912 17:45:46.324781 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:45:46.325354 kubelet[3184]: E0912 17:45:46.324898 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:46.325605 kubelet[3184]: W0912 17:45:46.324904 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:46.325605 kubelet[3184]: E0912 17:45:46.324911 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:45:46.325605 kubelet[3184]: E0912 17:45:46.325091 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:46.325605 kubelet[3184]: W0912 17:45:46.325099 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:46.325605 kubelet[3184]: E0912 17:45:46.325107 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:45:46.325605 kubelet[3184]: E0912 17:45:46.325520 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:46.325605 kubelet[3184]: W0912 17:45:46.325529 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:46.325605 kubelet[3184]: E0912 17:45:46.325538 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:45:46.325767 kubelet[3184]: E0912 17:45:46.325694 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:46.325767 kubelet[3184]: W0912 17:45:46.325701 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:46.325767 kubelet[3184]: E0912 17:45:46.325708 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:45:46.325827 kubelet[3184]: E0912 17:45:46.325819 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:46.325827 kubelet[3184]: W0912 17:45:46.325825 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:46.325870 kubelet[3184]: E0912 17:45:46.325833 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:45:46.326101 kubelet[3184]: E0912 17:45:46.325955 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:46.326101 kubelet[3184]: W0912 17:45:46.325962 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:46.326101 kubelet[3184]: E0912 17:45:46.325969 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:45:46.326690 kubelet[3184]: E0912 17:45:46.326478 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:46.326690 kubelet[3184]: W0912 17:45:46.326601 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:46.326690 kubelet[3184]: E0912 17:45:46.326613 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:45:46.328201 kubelet[3184]: E0912 17:45:46.328169 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:46.328201 kubelet[3184]: W0912 17:45:46.328188 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:46.328201 kubelet[3184]: E0912 17:45:46.328202 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:45:46.329140 kubelet[3184]: E0912 17:45:46.329119 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:46.329140 kubelet[3184]: W0912 17:45:46.329137 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:46.329219 kubelet[3184]: E0912 17:45:46.329149 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:45:46.331422 kubelet[3184]: E0912 17:45:46.331361 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:46.331422 kubelet[3184]: W0912 17:45:46.331378 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:46.331422 kubelet[3184]: E0912 17:45:46.331391 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:45:46.332132 kubelet[3184]: E0912 17:45:46.331681 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:46.332132 kubelet[3184]: W0912 17:45:46.331696 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:46.332132 kubelet[3184]: E0912 17:45:46.331706 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:45:46.333565 kubelet[3184]: E0912 17:45:46.333369 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:46.333565 kubelet[3184]: W0912 17:45:46.333383 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:46.333565 kubelet[3184]: E0912 17:45:46.333394 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:45:46.333932 kubelet[3184]: E0912 17:45:46.333699 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:46.333932 kubelet[3184]: W0912 17:45:46.333716 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:46.333932 kubelet[3184]: E0912 17:45:46.333726 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:45:46.333932 kubelet[3184]: E0912 17:45:46.333929 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:46.334034 kubelet[3184]: W0912 17:45:46.333938 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:46.334034 kubelet[3184]: E0912 17:45:46.333949 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:45:46.347691 kubelet[3184]: E0912 17:45:46.347626 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:46.347691 kubelet[3184]: W0912 17:45:46.347643 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:46.347691 kubelet[3184]: E0912 17:45:46.347659 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:45:46.360430 systemd[1]: Started cri-containerd-51c99a352205b6bd2df25c83e03eccc168f1cb69836a5f89b6c92adb50fdb863.scope - libcontainer container 51c99a352205b6bd2df25c83e03eccc168f1cb69836a5f89b6c92adb50fdb863. Sep 12 17:45:46.403546 containerd[1714]: time="2025-09-12T17:45:46.403508763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-sw4tx,Uid:599b5021-0227-45c9-9605-9d65789980f2,Namespace:calico-system,Attempt:0,} returns sandbox id \"51c99a352205b6bd2df25c83e03eccc168f1cb69836a5f89b6c92adb50fdb863\"" Sep 12 17:45:47.903882 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3637300189.mount: Deactivated successfully. 
Sep 12 17:45:48.264520 kubelet[3184]: E0912 17:45:48.264311 3184 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jjztm" podUID="64c67752-89ee-4f26-b63e-b37e41be4790" Sep 12 17:45:48.909286 containerd[1714]: time="2025-09-12T17:45:48.908888210Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:45:48.912202 containerd[1714]: time="2025-09-12T17:45:48.912128892Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=33105775" Sep 12 17:45:48.916210 containerd[1714]: time="2025-09-12T17:45:48.916157936Z" level=info msg="ImageCreate event name:\"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:45:48.921565 containerd[1714]: time="2025-09-12T17:45:48.921507180Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:45:48.922188 containerd[1714]: time="2025-09-12T17:45:48.922145781Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"33105629\" in 2.6655937s" Sep 12 17:45:48.922188 containerd[1714]: time="2025-09-12T17:45:48.922179621Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\"" 
Sep 12 17:45:48.924653 containerd[1714]: time="2025-09-12T17:45:48.924441782Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 12 17:45:48.947780 containerd[1714]: time="2025-09-12T17:45:48.947745522Z" level=info msg="CreateContainer within sandbox \"963f27b87e55be7bcbc16e314d477ccc0e0f26b205c951e329e0d292cabb1703\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 12 17:45:48.993338 containerd[1714]: time="2025-09-12T17:45:48.993201880Z" level=info msg="CreateContainer within sandbox \"963f27b87e55be7bcbc16e314d477ccc0e0f26b205c951e329e0d292cabb1703\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"14d49190a50806fc24746f7d45854b57f75e0f9349c0621ed4d9250e10dc33a6\"" Sep 12 17:45:48.993812 containerd[1714]: time="2025-09-12T17:45:48.993682720Z" level=info msg="StartContainer for \"14d49190a50806fc24746f7d45854b57f75e0f9349c0621ed4d9250e10dc33a6\"" Sep 12 17:45:49.022416 systemd[1]: Started cri-containerd-14d49190a50806fc24746f7d45854b57f75e0f9349c0621ed4d9250e10dc33a6.scope - libcontainer container 14d49190a50806fc24746f7d45854b57f75e0f9349c0621ed4d9250e10dc33a6. 
Sep 12 17:45:49.064224 containerd[1714]: time="2025-09-12T17:45:49.064043699Z" level=info msg="StartContainer for \"14d49190a50806fc24746f7d45854b57f75e0f9349c0621ed4d9250e10dc33a6\" returns successfully" Sep 12 17:45:49.408273 kubelet[3184]: I0912 17:45:49.408048 3184 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5b9f87d97f-hxpkv" podStartSLOduration=1.740596004 podStartE2EDuration="4.408034665s" podCreationTimestamp="2025-09-12 17:45:45 +0000 UTC" firstStartedPulling="2025-09-12 17:45:46.256208081 +0000 UTC m=+24.111836508" lastFinishedPulling="2025-09-12 17:45:48.923646702 +0000 UTC m=+26.779275169" observedRunningTime="2025-09-12 17:45:49.407952545 +0000 UTC m=+27.263581012" watchObservedRunningTime="2025-09-12 17:45:49.408034665 +0000 UTC m=+27.263663092" Sep 12 17:45:49.432366 kubelet[3184]: E0912 17:45:49.432325 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:49.432366 kubelet[3184]: W0912 17:45:49.432357 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:49.432517 kubelet[3184]: E0912 17:45:49.432381 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:45:49.432607 kubelet[3184]: E0912 17:45:49.432585 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:49.432650 kubelet[3184]: W0912 17:45:49.432600 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:49.432650 kubelet[3184]: E0912 17:45:49.432642 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:45:49.432884 kubelet[3184]: E0912 17:45:49.432859 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:49.432884 kubelet[3184]: W0912 17:45:49.432877 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:49.432954 kubelet[3184]: E0912 17:45:49.432887 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:45:49.433078 kubelet[3184]: E0912 17:45:49.433052 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:49.433078 kubelet[3184]: W0912 17:45:49.433066 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:49.433078 kubelet[3184]: E0912 17:45:49.433075 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:45:49.433250 kubelet[3184]: E0912 17:45:49.433225 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:49.433290 kubelet[3184]: W0912 17:45:49.433254 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:49.433290 kubelet[3184]: E0912 17:45:49.433265 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:45:49.433419 kubelet[3184]: E0912 17:45:49.433400 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:49.433419 kubelet[3184]: W0912 17:45:49.433412 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:49.433419 kubelet[3184]: E0912 17:45:49.433420 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:45:49.433569 kubelet[3184]: E0912 17:45:49.433550 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:49.433569 kubelet[3184]: W0912 17:45:49.433562 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:49.433569 kubelet[3184]: E0912 17:45:49.433570 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:45:49.433719 kubelet[3184]: E0912 17:45:49.433699 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:49.433719 kubelet[3184]: W0912 17:45:49.433712 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:49.433719 kubelet[3184]: E0912 17:45:49.433720 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:45:49.433872 kubelet[3184]: E0912 17:45:49.433853 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:49.433872 kubelet[3184]: W0912 17:45:49.433866 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:49.433872 kubelet[3184]: E0912 17:45:49.433874 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:45:49.434009 kubelet[3184]: E0912 17:45:49.433991 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:49.434009 kubelet[3184]: W0912 17:45:49.434005 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:49.434075 kubelet[3184]: E0912 17:45:49.434012 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:45:49.434151 kubelet[3184]: E0912 17:45:49.434133 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:49.434151 kubelet[3184]: W0912 17:45:49.434145 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:49.434151 kubelet[3184]: E0912 17:45:49.434152 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:45:49.434343 kubelet[3184]: E0912 17:45:49.434316 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:49.434343 kubelet[3184]: W0912 17:45:49.434329 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:49.434343 kubelet[3184]: E0912 17:45:49.434338 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:45:49.434505 kubelet[3184]: E0912 17:45:49.434477 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:49.434505 kubelet[3184]: W0912 17:45:49.434490 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:49.434505 kubelet[3184]: E0912 17:45:49.434498 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:45:49.434777 kubelet[3184]: E0912 17:45:49.434750 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:49.434777 kubelet[3184]: W0912 17:45:49.434766 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:49.434777 kubelet[3184]: E0912 17:45:49.434777 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:45:49.436221 kubelet[3184]: E0912 17:45:49.436189 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:49.436221 kubelet[3184]: W0912 17:45:49.436213 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:49.436221 kubelet[3184]: E0912 17:45:49.436225 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:45:49.443612 kubelet[3184]: E0912 17:45:49.443586 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:49.443612 kubelet[3184]: W0912 17:45:49.443607 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:49.443830 kubelet[3184]: E0912 17:45:49.443621 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:45:49.443936 kubelet[3184]: E0912 17:45:49.443918 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:49.443936 kubelet[3184]: W0912 17:45:49.443934 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:49.444004 kubelet[3184]: E0912 17:45:49.443947 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:45:49.444199 kubelet[3184]: E0912 17:45:49.444181 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:49.444199 kubelet[3184]: W0912 17:45:49.444197 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:49.444282 kubelet[3184]: E0912 17:45:49.444209 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:45:49.444490 kubelet[3184]: E0912 17:45:49.444472 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:49.444490 kubelet[3184]: W0912 17:45:49.444488 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:49.444661 kubelet[3184]: E0912 17:45:49.444498 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:45:49.444780 kubelet[3184]: E0912 17:45:49.444763 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:49.444780 kubelet[3184]: W0912 17:45:49.444778 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:49.444949 kubelet[3184]: E0912 17:45:49.444790 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:45:49.446343 kubelet[3184]: E0912 17:45:49.446321 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:49.446343 kubelet[3184]: W0912 17:45:49.446342 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:49.446539 kubelet[3184]: E0912 17:45:49.446354 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:45:49.446660 kubelet[3184]: E0912 17:45:49.446641 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:49.446660 kubelet[3184]: W0912 17:45:49.446657 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:49.446843 kubelet[3184]: E0912 17:45:49.446668 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:45:49.446958 kubelet[3184]: E0912 17:45:49.446939 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:49.446958 kubelet[3184]: W0912 17:45:49.446955 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:49.447020 kubelet[3184]: E0912 17:45:49.446967 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:45:49.447217 kubelet[3184]: E0912 17:45:49.447199 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:49.447217 kubelet[3184]: W0912 17:45:49.447215 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:49.447319 kubelet[3184]: E0912 17:45:49.447226 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:45:49.447612 kubelet[3184]: E0912 17:45:49.447592 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:49.447612 kubelet[3184]: W0912 17:45:49.447610 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:49.447702 kubelet[3184]: E0912 17:45:49.447622 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:45:49.447863 kubelet[3184]: E0912 17:45:49.447845 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:49.447863 kubelet[3184]: W0912 17:45:49.447860 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:49.448029 kubelet[3184]: E0912 17:45:49.447871 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:45:49.448139 kubelet[3184]: E0912 17:45:49.448120 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:49.448139 kubelet[3184]: W0912 17:45:49.448137 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:49.448339 kubelet[3184]: E0912 17:45:49.448147 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:45:49.448455 kubelet[3184]: E0912 17:45:49.448437 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:49.448455 kubelet[3184]: W0912 17:45:49.448454 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:49.448626 kubelet[3184]: E0912 17:45:49.448466 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:45:49.449063 kubelet[3184]: E0912 17:45:49.449041 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:49.449150 kubelet[3184]: W0912 17:45:49.449129 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:49.449982 kubelet[3184]: E0912 17:45:49.449881 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:45:49.450274 kubelet[3184]: E0912 17:45:49.450253 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:49.450274 kubelet[3184]: W0912 17:45:49.450271 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:49.450721 kubelet[3184]: E0912 17:45:49.450283 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:45:49.450721 kubelet[3184]: E0912 17:45:49.450470 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:49.450721 kubelet[3184]: W0912 17:45:49.450478 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:49.450721 kubelet[3184]: E0912 17:45:49.450487 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:45:49.451335 kubelet[3184]: E0912 17:45:49.451308 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:49.451335 kubelet[3184]: W0912 17:45:49.451326 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:49.451335 kubelet[3184]: E0912 17:45:49.451338 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:45:49.452329 kubelet[3184]: E0912 17:45:49.452305 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:49.452329 kubelet[3184]: W0912 17:45:49.452324 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:49.452440 kubelet[3184]: E0912 17:45:49.452336 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:45:50.266206 kubelet[3184]: E0912 17:45:50.265548 3184 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jjztm" podUID="64c67752-89ee-4f26-b63e-b37e41be4790" Sep 12 17:45:50.372830 kubelet[3184]: I0912 17:45:50.372801 3184 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:45:50.443839 kubelet[3184]: E0912 17:45:50.443811 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:50.444288 kubelet[3184]: W0912 17:45:50.444145 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:50.444288 kubelet[3184]: E0912 17:45:50.444174 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:45:50.444549 kubelet[3184]: E0912 17:45:50.444441 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:50.444549 kubelet[3184]: W0912 17:45:50.444456 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:50.444549 kubelet[3184]: E0912 17:45:50.444466 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:45:50.444706 kubelet[3184]: E0912 17:45:50.444694 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:50.444757 kubelet[3184]: W0912 17:45:50.444747 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:50.444888 kubelet[3184]: E0912 17:45:50.444804 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:45:50.444986 kubelet[3184]: E0912 17:45:50.444974 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:50.445048 kubelet[3184]: W0912 17:45:50.445036 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:50.445103 kubelet[3184]: E0912 17:45:50.445093 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:45:50.445404 kubelet[3184]: E0912 17:45:50.445312 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:50.445404 kubelet[3184]: W0912 17:45:50.445323 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:50.445404 kubelet[3184]: E0912 17:45:50.445333 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:45:50.445563 kubelet[3184]: E0912 17:45:50.445551 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:50.445622 kubelet[3184]: W0912 17:45:50.445611 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:50.445751 kubelet[3184]: E0912 17:45:50.445668 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:45:50.445840 kubelet[3184]: E0912 17:45:50.445830 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:50.445891 kubelet[3184]: W0912 17:45:50.445880 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:50.445945 kubelet[3184]: E0912 17:45:50.445935 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:45:50.446264 kubelet[3184]: E0912 17:45:50.446216 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:50.446264 kubelet[3184]: W0912 17:45:50.446229 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:50.446468 kubelet[3184]: E0912 17:45:50.446364 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:45:50.446574 kubelet[3184]: E0912 17:45:50.446562 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:50.446632 kubelet[3184]: W0912 17:45:50.446622 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:50.446757 kubelet[3184]: E0912 17:45:50.446676 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:45:50.446851 kubelet[3184]: E0912 17:45:50.446840 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:50.446914 kubelet[3184]: W0912 17:45:50.446902 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:50.446962 kubelet[3184]: E0912 17:45:50.446953 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:45:50.447147 kubelet[3184]: E0912 17:45:50.447135 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:50.447319 kubelet[3184]: W0912 17:45:50.447204 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:50.447319 kubelet[3184]: E0912 17:45:50.447219 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:45:50.447453 kubelet[3184]: E0912 17:45:50.447440 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:50.447512 kubelet[3184]: W0912 17:45:50.447501 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:50.447651 kubelet[3184]: E0912 17:45:50.447559 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:45:50.447746 kubelet[3184]: E0912 17:45:50.447735 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:50.447795 kubelet[3184]: W0912 17:45:50.447785 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:50.447851 kubelet[3184]: E0912 17:45:50.447841 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:45:50.448045 kubelet[3184]: E0912 17:45:50.448033 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:50.448202 kubelet[3184]: W0912 17:45:50.448106 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:50.448202 kubelet[3184]: E0912 17:45:50.448121 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:45:50.448337 kubelet[3184]: E0912 17:45:50.448325 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:50.448399 kubelet[3184]: W0912 17:45:50.448388 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:50.448455 kubelet[3184]: E0912 17:45:50.448443 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:45:50.451622 kubelet[3184]: E0912 17:45:50.451602 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:50.451622 kubelet[3184]: W0912 17:45:50.451619 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:50.451737 kubelet[3184]: E0912 17:45:50.451631 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:45:50.451815 kubelet[3184]: E0912 17:45:50.451797 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:50.451815 kubelet[3184]: W0912 17:45:50.451810 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:50.451986 kubelet[3184]: E0912 17:45:50.451819 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:45:50.452071 kubelet[3184]: E0912 17:45:50.452059 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:50.452194 kubelet[3184]: W0912 17:45:50.452121 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:50.452194 kubelet[3184]: E0912 17:45:50.452137 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:45:50.452514 kubelet[3184]: E0912 17:45:50.452442 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:50.452514 kubelet[3184]: W0912 17:45:50.452456 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:50.452514 kubelet[3184]: E0912 17:45:50.452469 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:45:50.452874 kubelet[3184]: E0912 17:45:50.452754 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:50.452874 kubelet[3184]: W0912 17:45:50.452767 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:50.452874 kubelet[3184]: E0912 17:45:50.452778 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:45:50.453095 kubelet[3184]: E0912 17:45:50.453029 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:50.453095 kubelet[3184]: W0912 17:45:50.453041 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:50.453095 kubelet[3184]: E0912 17:45:50.453052 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:45:50.453462 kubelet[3184]: E0912 17:45:50.453358 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:50.453462 kubelet[3184]: W0912 17:45:50.453371 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:50.453462 kubelet[3184]: E0912 17:45:50.453383 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:45:50.453855 kubelet[3184]: E0912 17:45:50.453746 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:50.453855 kubelet[3184]: W0912 17:45:50.453759 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:50.453855 kubelet[3184]: E0912 17:45:50.453770 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:45:50.454075 kubelet[3184]: E0912 17:45:50.453987 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:50.454075 kubelet[3184]: W0912 17:45:50.453999 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:50.454075 kubelet[3184]: E0912 17:45:50.454009 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:45:50.454391 kubelet[3184]: E0912 17:45:50.454317 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:50.454391 kubelet[3184]: W0912 17:45:50.454333 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:50.454391 kubelet[3184]: E0912 17:45:50.454344 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:45:50.454814 kubelet[3184]: E0912 17:45:50.454688 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:50.454814 kubelet[3184]: W0912 17:45:50.454701 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:50.454814 kubelet[3184]: E0912 17:45:50.454713 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:45:50.455166 kubelet[3184]: E0912 17:45:50.455066 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:50.455166 kubelet[3184]: W0912 17:45:50.455080 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:50.455166 kubelet[3184]: E0912 17:45:50.455091 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:45:50.455454 kubelet[3184]: E0912 17:45:50.455349 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:50.455454 kubelet[3184]: W0912 17:45:50.455361 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:50.455454 kubelet[3184]: E0912 17:45:50.455371 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:45:50.455739 kubelet[3184]: E0912 17:45:50.455636 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:50.455739 kubelet[3184]: W0912 17:45:50.455649 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:50.455739 kubelet[3184]: E0912 17:45:50.455660 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:45:50.456120 kubelet[3184]: E0912 17:45:50.455951 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:50.456120 kubelet[3184]: W0912 17:45:50.455964 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:50.456120 kubelet[3184]: E0912 17:45:50.455974 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:45:50.456251 kubelet[3184]: E0912 17:45:50.456220 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:50.456281 kubelet[3184]: W0912 17:45:50.456250 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:50.456281 kubelet[3184]: E0912 17:45:50.456263 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:45:50.456624 kubelet[3184]: E0912 17:45:50.456521 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:50.456624 kubelet[3184]: W0912 17:45:50.456534 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:50.456624 kubelet[3184]: E0912 17:45:50.456545 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:45:50.456914 kubelet[3184]: E0912 17:45:50.456868 3184 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:45:50.456914 kubelet[3184]: W0912 17:45:50.456881 3184 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:45:50.456914 kubelet[3184]: E0912 17:45:50.456894 3184 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:45:50.807039 containerd[1714]: time="2025-09-12T17:45:50.806985510Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:45:50.810169 containerd[1714]: time="2025-09-12T17:45:50.810016472Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4266814" Sep 12 17:45:50.813655 containerd[1714]: time="2025-09-12T17:45:50.813442515Z" level=info msg="ImageCreate event name:\"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:45:50.818139 containerd[1714]: time="2025-09-12T17:45:50.818108999Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:45:50.818863 containerd[1714]: time="2025-09-12T17:45:50.818705720Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5636015\" in 1.894227778s" Sep 12 17:45:50.818863 containerd[1714]: time="2025-09-12T17:45:50.818741520Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\"" Sep 12 17:45:50.827652 containerd[1714]: time="2025-09-12T17:45:50.827484687Z" level=info msg="CreateContainer within sandbox \"51c99a352205b6bd2df25c83e03eccc168f1cb69836a5f89b6c92adb50fdb863\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 12 17:45:50.852044 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount616996025.mount: Deactivated successfully. Sep 12 17:45:50.865940 containerd[1714]: time="2025-09-12T17:45:50.865827599Z" level=info msg="CreateContainer within sandbox \"51c99a352205b6bd2df25c83e03eccc168f1cb69836a5f89b6c92adb50fdb863\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e379bdac8e4468f7d850645a4e2cbaebdd77c47bcb0df63cd57628698b46ace8\"" Sep 12 17:45:50.867434 containerd[1714]: time="2025-09-12T17:45:50.866276479Z" level=info msg="StartContainer for \"e379bdac8e4468f7d850645a4e2cbaebdd77c47bcb0df63cd57628698b46ace8\"" Sep 12 17:45:50.895424 systemd[1]: Started cri-containerd-e379bdac8e4468f7d850645a4e2cbaebdd77c47bcb0df63cd57628698b46ace8.scope - libcontainer container e379bdac8e4468f7d850645a4e2cbaebdd77c47bcb0df63cd57628698b46ace8. Sep 12 17:45:50.924631 containerd[1714]: time="2025-09-12T17:45:50.924590728Z" level=info msg="StartContainer for \"e379bdac8e4468f7d850645a4e2cbaebdd77c47bcb0df63cd57628698b46ace8\" returns successfully" Sep 12 17:45:50.938417 systemd[1]: cri-containerd-e379bdac8e4468f7d850645a4e2cbaebdd77c47bcb0df63cd57628698b46ace8.scope: Deactivated successfully. 
Sep 12 17:45:50.963719 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e379bdac8e4468f7d850645a4e2cbaebdd77c47bcb0df63cd57628698b46ace8-rootfs.mount: Deactivated successfully. Sep 12 17:45:52.039695 containerd[1714]: time="2025-09-12T17:45:52.039560576Z" level=info msg="shim disconnected" id=e379bdac8e4468f7d850645a4e2cbaebdd77c47bcb0df63cd57628698b46ace8 namespace=k8s.io Sep 12 17:45:52.039695 containerd[1714]: time="2025-09-12T17:45:52.039633616Z" level=warning msg="cleaning up after shim disconnected" id=e379bdac8e4468f7d850645a4e2cbaebdd77c47bcb0df63cd57628698b46ace8 namespace=k8s.io Sep 12 17:45:52.039695 containerd[1714]: time="2025-09-12T17:45:52.039643496Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:45:52.049254 containerd[1714]: time="2025-09-12T17:45:52.049090024Z" level=warning msg="cleanup warnings time=\"2025-09-12T17:45:52Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Sep 12 17:45:52.265423 kubelet[3184]: E0912 17:45:52.264979 3184 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jjztm" podUID="64c67752-89ee-4f26-b63e-b37e41be4790" Sep 12 17:45:52.382094 containerd[1714]: time="2025-09-12T17:45:52.381981901Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 12 17:45:54.266549 kubelet[3184]: E0912 17:45:54.265461 3184 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jjztm" podUID="64c67752-89ee-4f26-b63e-b37e41be4790" Sep 12 17:45:55.810175 
containerd[1714]: time="2025-09-12T17:45:55.809433396Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:45:55.812594 containerd[1714]: time="2025-09-12T17:45:55.812566398Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=65913477" Sep 12 17:45:55.816423 containerd[1714]: time="2025-09-12T17:45:55.816397722Z" level=info msg="ImageCreate event name:\"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:45:55.822146 containerd[1714]: time="2025-09-12T17:45:55.822100407Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:45:55.824227 containerd[1714]: time="2025-09-12T17:45:55.824187289Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"67282718\" in 3.442047588s" Sep 12 17:45:55.824227 containerd[1714]: time="2025-09-12T17:45:55.824221609Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\"" Sep 12 17:45:55.832180 containerd[1714]: time="2025-09-12T17:45:55.832146016Z" level=info msg="CreateContainer within sandbox \"51c99a352205b6bd2df25c83e03eccc168f1cb69836a5f89b6c92adb50fdb863\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 12 17:45:55.873426 containerd[1714]: time="2025-09-12T17:45:55.873380692Z" level=info msg="CreateContainer within sandbox 
\"51c99a352205b6bd2df25c83e03eccc168f1cb69836a5f89b6c92adb50fdb863\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"0d41333038826c5bcc976bdb2b5e4892e81e4b742c51c5a427926345c6dc9aa4\"" Sep 12 17:45:55.874344 containerd[1714]: time="2025-09-12T17:45:55.874275773Z" level=info msg="StartContainer for \"0d41333038826c5bcc976bdb2b5e4892e81e4b742c51c5a427926345c6dc9aa4\"" Sep 12 17:45:55.908401 systemd[1]: Started cri-containerd-0d41333038826c5bcc976bdb2b5e4892e81e4b742c51c5a427926345c6dc9aa4.scope - libcontainer container 0d41333038826c5bcc976bdb2b5e4892e81e4b742c51c5a427926345c6dc9aa4. Sep 12 17:45:55.941705 containerd[1714]: time="2025-09-12T17:45:55.941618353Z" level=info msg="StartContainer for \"0d41333038826c5bcc976bdb2b5e4892e81e4b742c51c5a427926345c6dc9aa4\" returns successfully" Sep 12 17:45:56.266573 kubelet[3184]: E0912 17:45:56.266456 3184 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jjztm" podUID="64c67752-89ee-4f26-b63e-b37e41be4790" Sep 12 17:45:57.372020 systemd[1]: cri-containerd-0d41333038826c5bcc976bdb2b5e4892e81e4b742c51c5a427926345c6dc9aa4.scope: Deactivated successfully. Sep 12 17:45:57.393138 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0d41333038826c5bcc976bdb2b5e4892e81e4b742c51c5a427926345c6dc9aa4-rootfs.mount: Deactivated successfully. Sep 12 17:45:57.398271 kubelet[3184]: I0912 17:45:57.397398 3184 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 12 17:45:58.025964 systemd[1]: Created slice kubepods-burstable-podfcbac6f2_0b6a_4a25_b5bf_41bd3c3b4f83.slice - libcontainer container kubepods-burstable-podfcbac6f2_0b6a_4a25_b5bf_41bd3c3b4f83.slice. 
Sep 12 17:45:58.035928 containerd[1714]: time="2025-09-12T17:45:58.035730008Z" level=info msg="shim disconnected" id=0d41333038826c5bcc976bdb2b5e4892e81e4b742c51c5a427926345c6dc9aa4 namespace=k8s.io Sep 12 17:45:58.035928 containerd[1714]: time="2025-09-12T17:45:58.035925168Z" level=warning msg="cleaning up after shim disconnected" id=0d41333038826c5bcc976bdb2b5e4892e81e4b742c51c5a427926345c6dc9aa4 namespace=k8s.io Sep 12 17:45:58.036317 containerd[1714]: time="2025-09-12T17:45:58.036141888Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:45:58.048258 systemd[1]: Created slice kubepods-burstable-podacb602f3_ad8a_4d55_aba0_1c4c66a93bb2.slice - libcontainer container kubepods-burstable-podacb602f3_ad8a_4d55_aba0_1c4c66a93bb2.slice. Sep 12 17:45:58.058181 systemd[1]: Created slice kubepods-besteffort-podcb0990fd_3735_45b3_86b2_04ad783e7243.slice - libcontainer container kubepods-besteffort-podcb0990fd_3735_45b3_86b2_04ad783e7243.slice. Sep 12 17:45:58.060564 containerd[1714]: time="2025-09-12T17:45:58.060453430Z" level=warning msg="cleanup warnings time=\"2025-09-12T17:45:58Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Sep 12 17:45:58.073293 systemd[1]: Created slice kubepods-besteffort-podfaa47950_5ad9_4518_9026_0ce0997471e9.slice - libcontainer container kubepods-besteffort-podfaa47950_5ad9_4518_9026_0ce0997471e9.slice. Sep 12 17:45:58.086800 systemd[1]: Created slice kubepods-besteffort-pod0dd061b5_685d_4d98_8a24_ddf59d9fadda.slice - libcontainer container kubepods-besteffort-pod0dd061b5_685d_4d98_8a24_ddf59d9fadda.slice. Sep 12 17:45:58.093740 systemd[1]: Created slice kubepods-besteffort-pode27e3cc8_0ec3_48e6_8bfe_e729aef4366f.slice - libcontainer container kubepods-besteffort-pode27e3cc8_0ec3_48e6_8bfe_e729aef4366f.slice. 
Sep 12 17:45:58.099540 systemd[1]: Created slice kubepods-besteffort-podbbd20681_1f6f_410b_806a_7fe01c4c3226.slice - libcontainer container kubepods-besteffort-podbbd20681_1f6f_410b_806a_7fe01c4c3226.slice. Sep 12 17:45:58.107276 kubelet[3184]: I0912 17:45:58.107027 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/bbd20681-1f6f-410b-806a-7fe01c4c3226-calico-apiserver-certs\") pod \"calico-apiserver-868c8dbd57-s6vg4\" (UID: \"bbd20681-1f6f-410b-806a-7fe01c4c3226\") " pod="calico-apiserver/calico-apiserver-868c8dbd57-s6vg4" Sep 12 17:45:58.107276 kubelet[3184]: I0912 17:45:58.107079 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lqvh\" (UniqueName: \"kubernetes.io/projected/fcbac6f2-0b6a-4a25-b5bf-41bd3c3b4f83-kube-api-access-7lqvh\") pod \"coredns-674b8bbfcf-kbs2p\" (UID: \"fcbac6f2-0b6a-4a25-b5bf-41bd3c3b4f83\") " pod="kube-system/coredns-674b8bbfcf-kbs2p" Sep 12 17:45:58.107276 kubelet[3184]: I0912 17:45:58.107111 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4h5z\" (UniqueName: \"kubernetes.io/projected/bbd20681-1f6f-410b-806a-7fe01c4c3226-kube-api-access-c4h5z\") pod \"calico-apiserver-868c8dbd57-s6vg4\" (UID: \"bbd20681-1f6f-410b-806a-7fe01c4c3226\") " pod="calico-apiserver/calico-apiserver-868c8dbd57-s6vg4" Sep 12 17:45:58.107276 kubelet[3184]: I0912 17:45:58.107131 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e27e3cc8-0ec3-48e6-8bfe-e729aef4366f-config\") pod \"goldmane-54d579b49d-slnk5\" (UID: \"e27e3cc8-0ec3-48e6-8bfe-e729aef4366f\") " pod="calico-system/goldmane-54d579b49d-slnk5" Sep 12 17:45:58.107276 kubelet[3184]: I0912 17:45:58.107151 3184 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/e27e3cc8-0ec3-48e6-8bfe-e729aef4366f-goldmane-key-pair\") pod \"goldmane-54d579b49d-slnk5\" (UID: \"e27e3cc8-0ec3-48e6-8bfe-e729aef4366f\") " pod="calico-system/goldmane-54d579b49d-slnk5" Sep 12 17:45:58.107562 kubelet[3184]: I0912 17:45:58.107175 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cfnt\" (UniqueName: \"kubernetes.io/projected/0dd061b5-685d-4d98-8a24-ddf59d9fadda-kube-api-access-4cfnt\") pod \"whisker-64cb98bb48-57t2s\" (UID: \"0dd061b5-685d-4d98-8a24-ddf59d9fadda\") " pod="calico-system/whisker-64cb98bb48-57t2s" Sep 12 17:45:58.107562 kubelet[3184]: I0912 17:45:58.107198 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n946x\" (UniqueName: \"kubernetes.io/projected/faa47950-5ad9-4518-9026-0ce0997471e9-kube-api-access-n946x\") pod \"calico-apiserver-868c8dbd57-5bhwh\" (UID: \"faa47950-5ad9-4518-9026-0ce0997471e9\") " pod="calico-apiserver/calico-apiserver-868c8dbd57-5bhwh" Sep 12 17:45:58.107562 kubelet[3184]: I0912 17:45:58.107218 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb0990fd-3735-45b3-86b2-04ad783e7243-tigera-ca-bundle\") pod \"calico-kube-controllers-84df6476b7-wbsmk\" (UID: \"cb0990fd-3735-45b3-86b2-04ad783e7243\") " pod="calico-system/calico-kube-controllers-84df6476b7-wbsmk" Sep 12 17:45:58.107562 kubelet[3184]: I0912 17:45:58.107278 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxbmz\" (UniqueName: \"kubernetes.io/projected/cb0990fd-3735-45b3-86b2-04ad783e7243-kube-api-access-xxbmz\") pod \"calico-kube-controllers-84df6476b7-wbsmk\" (UID: \"cb0990fd-3735-45b3-86b2-04ad783e7243\") " 
pod="calico-system/calico-kube-controllers-84df6476b7-wbsmk" Sep 12 17:45:58.107562 kubelet[3184]: I0912 17:45:58.107303 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0dd061b5-685d-4d98-8a24-ddf59d9fadda-whisker-ca-bundle\") pod \"whisker-64cb98bb48-57t2s\" (UID: \"0dd061b5-685d-4d98-8a24-ddf59d9fadda\") " pod="calico-system/whisker-64cb98bb48-57t2s" Sep 12 17:45:58.107673 kubelet[3184]: I0912 17:45:58.107337 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/faa47950-5ad9-4518-9026-0ce0997471e9-calico-apiserver-certs\") pod \"calico-apiserver-868c8dbd57-5bhwh\" (UID: \"faa47950-5ad9-4518-9026-0ce0997471e9\") " pod="calico-apiserver/calico-apiserver-868c8dbd57-5bhwh" Sep 12 17:45:58.107673 kubelet[3184]: I0912 17:45:58.107358 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e27e3cc8-0ec3-48e6-8bfe-e729aef4366f-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-slnk5\" (UID: \"e27e3cc8-0ec3-48e6-8bfe-e729aef4366f\") " pod="calico-system/goldmane-54d579b49d-slnk5" Sep 12 17:45:58.107673 kubelet[3184]: I0912 17:45:58.107379 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fcbac6f2-0b6a-4a25-b5bf-41bd3c3b4f83-config-volume\") pod \"coredns-674b8bbfcf-kbs2p\" (UID: \"fcbac6f2-0b6a-4a25-b5bf-41bd3c3b4f83\") " pod="kube-system/coredns-674b8bbfcf-kbs2p" Sep 12 17:45:58.107673 kubelet[3184]: I0912 17:45:58.107397 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: 
\"kubernetes.io/secret/0dd061b5-685d-4d98-8a24-ddf59d9fadda-whisker-backend-key-pair\") pod \"whisker-64cb98bb48-57t2s\" (UID: \"0dd061b5-685d-4d98-8a24-ddf59d9fadda\") " pod="calico-system/whisker-64cb98bb48-57t2s" Sep 12 17:45:58.107673 kubelet[3184]: I0912 17:45:58.107420 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/acb602f3-ad8a-4d55-aba0-1c4c66a93bb2-config-volume\") pod \"coredns-674b8bbfcf-x6pz9\" (UID: \"acb602f3-ad8a-4d55-aba0-1c4c66a93bb2\") " pod="kube-system/coredns-674b8bbfcf-x6pz9" Sep 12 17:45:58.107779 kubelet[3184]: I0912 17:45:58.107440 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pl94w\" (UniqueName: \"kubernetes.io/projected/acb602f3-ad8a-4d55-aba0-1c4c66a93bb2-kube-api-access-pl94w\") pod \"coredns-674b8bbfcf-x6pz9\" (UID: \"acb602f3-ad8a-4d55-aba0-1c4c66a93bb2\") " pod="kube-system/coredns-674b8bbfcf-x6pz9" Sep 12 17:45:58.107779 kubelet[3184]: I0912 17:45:58.107461 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56jqp\" (UniqueName: \"kubernetes.io/projected/e27e3cc8-0ec3-48e6-8bfe-e729aef4366f-kube-api-access-56jqp\") pod \"goldmane-54d579b49d-slnk5\" (UID: \"e27e3cc8-0ec3-48e6-8bfe-e729aef4366f\") " pod="calico-system/goldmane-54d579b49d-slnk5" Sep 12 17:45:58.272215 systemd[1]: Created slice kubepods-besteffort-pod64c67752_89ee_4f26_b63e_b37e41be4790.slice - libcontainer container kubepods-besteffort-pod64c67752_89ee_4f26_b63e_b37e41be4790.slice. 
Sep 12 17:45:58.275141 containerd[1714]: time="2025-09-12T17:45:58.275101940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jjztm,Uid:64c67752-89ee-4f26-b63e-b37e41be4790,Namespace:calico-system,Attempt:0,}" Sep 12 17:45:58.332271 containerd[1714]: time="2025-09-12T17:45:58.332003910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-kbs2p,Uid:fcbac6f2-0b6a-4a25-b5bf-41bd3c3b4f83,Namespace:kube-system,Attempt:0,}" Sep 12 17:45:58.357584 containerd[1714]: time="2025-09-12T17:45:58.357542373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-x6pz9,Uid:acb602f3-ad8a-4d55-aba0-1c4c66a93bb2,Namespace:kube-system,Attempt:0,}" Sep 12 17:45:58.357910 containerd[1714]: time="2025-09-12T17:45:58.357878973Z" level=error msg="Failed to destroy network for sandbox \"6231a9efd0cb51838e9a568fa8e5f840024bbba691fddd4da61c2d8ed6a2268a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:45:58.358325 containerd[1714]: time="2025-09-12T17:45:58.358278213Z" level=error msg="encountered an error cleaning up failed sandbox \"6231a9efd0cb51838e9a568fa8e5f840024bbba691fddd4da61c2d8ed6a2268a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:45:58.360357 containerd[1714]: time="2025-09-12T17:45:58.358329653Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jjztm,Uid:64c67752-89ee-4f26-b63e-b37e41be4790,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6231a9efd0cb51838e9a568fa8e5f840024bbba691fddd4da61c2d8ed6a2268a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:45:58.360419 kubelet[3184]: E0912 17:45:58.358506 3184 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6231a9efd0cb51838e9a568fa8e5f840024bbba691fddd4da61c2d8ed6a2268a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:45:58.360419 kubelet[3184]: E0912 17:45:58.358565 3184 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6231a9efd0cb51838e9a568fa8e5f840024bbba691fddd4da61c2d8ed6a2268a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jjztm" Sep 12 17:45:58.360419 kubelet[3184]: E0912 17:45:58.358584 3184 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6231a9efd0cb51838e9a568fa8e5f840024bbba691fddd4da61c2d8ed6a2268a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jjztm" Sep 12 17:45:58.360502 kubelet[3184]: E0912 17:45:58.358626 3184 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jjztm_calico-system(64c67752-89ee-4f26-b63e-b37e41be4790)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jjztm_calico-system(64c67752-89ee-4f26-b63e-b37e41be4790)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"6231a9efd0cb51838e9a568fa8e5f840024bbba691fddd4da61c2d8ed6a2268a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jjztm" podUID="64c67752-89ee-4f26-b63e-b37e41be4790" Sep 12 17:45:58.370177 containerd[1714]: time="2025-09-12T17:45:58.370136624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84df6476b7-wbsmk,Uid:cb0990fd-3735-45b3-86b2-04ad783e7243,Namespace:calico-system,Attempt:0,}" Sep 12 17:45:58.378201 containerd[1714]: time="2025-09-12T17:45:58.378161871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-868c8dbd57-5bhwh,Uid:faa47950-5ad9-4518-9026-0ce0997471e9,Namespace:calico-apiserver,Attempt:0,}" Sep 12 17:45:58.395069 containerd[1714]: time="2025-09-12T17:45:58.395024326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-64cb98bb48-57t2s,Uid:0dd061b5-685d-4d98-8a24-ddf59d9fadda,Namespace:calico-system,Attempt:0,}" Sep 12 17:45:58.399374 containerd[1714]: time="2025-09-12T17:45:58.397170408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-slnk5,Uid:e27e3cc8-0ec3-48e6-8bfe-e729aef4366f,Namespace:calico-system,Attempt:0,}" Sep 12 17:45:58.409460 containerd[1714]: time="2025-09-12T17:45:58.408785858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-868c8dbd57-s6vg4,Uid:bbd20681-1f6f-410b-806a-7fe01c4c3226,Namespace:calico-apiserver,Attempt:0,}" Sep 12 17:45:58.412490 containerd[1714]: time="2025-09-12T17:45:58.412442061Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 12 17:45:58.415174 kubelet[3184]: I0912 17:45:58.415139 3184 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6231a9efd0cb51838e9a568fa8e5f840024bbba691fddd4da61c2d8ed6a2268a" Sep 12 17:45:58.423462 containerd[1714]: 
time="2025-09-12T17:45:58.421506829Z" level=info msg="StopPodSandbox for \"6231a9efd0cb51838e9a568fa8e5f840024bbba691fddd4da61c2d8ed6a2268a\"" Sep 12 17:45:58.423462 containerd[1714]: time="2025-09-12T17:45:58.421680270Z" level=info msg="Ensure that sandbox 6231a9efd0cb51838e9a568fa8e5f840024bbba691fddd4da61c2d8ed6a2268a in task-service has been cleanup successfully" Sep 12 17:45:58.466277 containerd[1714]: time="2025-09-12T17:45:58.465983149Z" level=error msg="StopPodSandbox for \"6231a9efd0cb51838e9a568fa8e5f840024bbba691fddd4da61c2d8ed6a2268a\" failed" error="failed to destroy network for sandbox \"6231a9efd0cb51838e9a568fa8e5f840024bbba691fddd4da61c2d8ed6a2268a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:45:58.467334 kubelet[3184]: E0912 17:45:58.467133 3184 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6231a9efd0cb51838e9a568fa8e5f840024bbba691fddd4da61c2d8ed6a2268a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6231a9efd0cb51838e9a568fa8e5f840024bbba691fddd4da61c2d8ed6a2268a" Sep 12 17:45:58.467334 kubelet[3184]: E0912 17:45:58.467206 3184 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6231a9efd0cb51838e9a568fa8e5f840024bbba691fddd4da61c2d8ed6a2268a"} Sep 12 17:45:58.467334 kubelet[3184]: E0912 17:45:58.467279 3184 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"64c67752-89ee-4f26-b63e-b37e41be4790\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6231a9efd0cb51838e9a568fa8e5f840024bbba691fddd4da61c2d8ed6a2268a\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:45:58.467334 kubelet[3184]: E0912 17:45:58.467299 3184 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"64c67752-89ee-4f26-b63e-b37e41be4790\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6231a9efd0cb51838e9a568fa8e5f840024bbba691fddd4da61c2d8ed6a2268a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jjztm" podUID="64c67752-89ee-4f26-b63e-b37e41be4790" Sep 12 17:45:58.467952 containerd[1714]: time="2025-09-12T17:45:58.467909990Z" level=error msg="Failed to destroy network for sandbox \"157d360263fb2c7af3aa7f9f8815c289dd33493d0095a436f60ec6ac662b0fa3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:45:58.468543 containerd[1714]: time="2025-09-12T17:45:58.468342631Z" level=error msg="encountered an error cleaning up failed sandbox \"157d360263fb2c7af3aa7f9f8815c289dd33493d0095a436f60ec6ac662b0fa3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:45:58.468543 containerd[1714]: time="2025-09-12T17:45:58.468453031Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-kbs2p,Uid:fcbac6f2-0b6a-4a25-b5bf-41bd3c3b4f83,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"157d360263fb2c7af3aa7f9f8815c289dd33493d0095a436f60ec6ac662b0fa3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:45:58.470697 kubelet[3184]: E0912 17:45:58.468651 3184 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"157d360263fb2c7af3aa7f9f8815c289dd33493d0095a436f60ec6ac662b0fa3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:45:58.470697 kubelet[3184]: E0912 17:45:58.468707 3184 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"157d360263fb2c7af3aa7f9f8815c289dd33493d0095a436f60ec6ac662b0fa3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-kbs2p" Sep 12 17:45:58.470697 kubelet[3184]: E0912 17:45:58.468726 3184 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"157d360263fb2c7af3aa7f9f8815c289dd33493d0095a436f60ec6ac662b0fa3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-kbs2p" Sep 12 17:45:58.470811 kubelet[3184]: E0912 17:45:58.468786 3184 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-kbs2p_kube-system(fcbac6f2-0b6a-4a25-b5bf-41bd3c3b4f83)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-674b8bbfcf-kbs2p_kube-system(fcbac6f2-0b6a-4a25-b5bf-41bd3c3b4f83)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"157d360263fb2c7af3aa7f9f8815c289dd33493d0095a436f60ec6ac662b0fa3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-kbs2p" podUID="fcbac6f2-0b6a-4a25-b5bf-41bd3c3b4f83" Sep 12 17:45:58.472581 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-157d360263fb2c7af3aa7f9f8815c289dd33493d0095a436f60ec6ac662b0fa3-shm.mount: Deactivated successfully. Sep 12 17:45:58.675030 containerd[1714]: time="2025-09-12T17:45:58.673667093Z" level=error msg="Failed to destroy network for sandbox \"e7574ca7035713c5e387975709e4546cfa67b972fafaca77538d91d7bf3d6afa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:45:58.675030 containerd[1714]: time="2025-09-12T17:45:58.673972333Z" level=error msg="encountered an error cleaning up failed sandbox \"e7574ca7035713c5e387975709e4546cfa67b972fafaca77538d91d7bf3d6afa\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:45:58.675030 containerd[1714]: time="2025-09-12T17:45:58.674019693Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-x6pz9,Uid:acb602f3-ad8a-4d55-aba0-1c4c66a93bb2,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e7574ca7035713c5e387975709e4546cfa67b972fafaca77538d91d7bf3d6afa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:45:58.676188 kubelet[3184]: E0912 17:45:58.674317 3184 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7574ca7035713c5e387975709e4546cfa67b972fafaca77538d91d7bf3d6afa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:45:58.676188 kubelet[3184]: E0912 17:45:58.674370 3184 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7574ca7035713c5e387975709e4546cfa67b972fafaca77538d91d7bf3d6afa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-x6pz9" Sep 12 17:45:58.676188 kubelet[3184]: E0912 17:45:58.674394 3184 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7574ca7035713c5e387975709e4546cfa67b972fafaca77538d91d7bf3d6afa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-x6pz9" Sep 12 17:45:58.676792 kubelet[3184]: E0912 17:45:58.674435 3184 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-x6pz9_kube-system(acb602f3-ad8a-4d55-aba0-1c4c66a93bb2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-x6pz9_kube-system(acb602f3-ad8a-4d55-aba0-1c4c66a93bb2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"e7574ca7035713c5e387975709e4546cfa67b972fafaca77538d91d7bf3d6afa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-x6pz9" podUID="acb602f3-ad8a-4d55-aba0-1c4c66a93bb2" Sep 12 17:45:58.695756 containerd[1714]: time="2025-09-12T17:45:58.695684712Z" level=error msg="Failed to destroy network for sandbox \"ba4f512a2fc1ac29afedd0f5e54eb79f9951c31c8fb924d51640398aff809e55\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:45:58.697340 containerd[1714]: time="2025-09-12T17:45:58.696965913Z" level=error msg="encountered an error cleaning up failed sandbox \"ba4f512a2fc1ac29afedd0f5e54eb79f9951c31c8fb924d51640398aff809e55\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:45:58.697648 containerd[1714]: time="2025-09-12T17:45:58.697306874Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84df6476b7-wbsmk,Uid:cb0990fd-3735-45b3-86b2-04ad783e7243,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ba4f512a2fc1ac29afedd0f5e54eb79f9951c31c8fb924d51640398aff809e55\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:45:58.698170 kubelet[3184]: E0912 17:45:58.697864 3184 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"ba4f512a2fc1ac29afedd0f5e54eb79f9951c31c8fb924d51640398aff809e55\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:45:58.698170 kubelet[3184]: E0912 17:45:58.697946 3184 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba4f512a2fc1ac29afedd0f5e54eb79f9951c31c8fb924d51640398aff809e55\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-84df6476b7-wbsmk" Sep 12 17:45:58.698490 kubelet[3184]: E0912 17:45:58.698223 3184 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba4f512a2fc1ac29afedd0f5e54eb79f9951c31c8fb924d51640398aff809e55\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-84df6476b7-wbsmk" Sep 12 17:45:58.698681 kubelet[3184]: E0912 17:45:58.698588 3184 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-84df6476b7-wbsmk_calico-system(cb0990fd-3735-45b3-86b2-04ad783e7243)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-84df6476b7-wbsmk_calico-system(cb0990fd-3735-45b3-86b2-04ad783e7243)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ba4f512a2fc1ac29afedd0f5e54eb79f9951c31c8fb924d51640398aff809e55\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/calico-kube-controllers-84df6476b7-wbsmk" podUID="cb0990fd-3735-45b3-86b2-04ad783e7243" Sep 12 17:45:58.728462 containerd[1714]: time="2025-09-12T17:45:58.728402581Z" level=error msg="Failed to destroy network for sandbox \"4800a32dc0ece1e37f0196ac2e5a09864687a071daee93d8f8db184499212a46\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:45:58.728871 containerd[1714]: time="2025-09-12T17:45:58.728766582Z" level=error msg="encountered an error cleaning up failed sandbox \"4800a32dc0ece1e37f0196ac2e5a09864687a071daee93d8f8db184499212a46\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:45:58.728871 containerd[1714]: time="2025-09-12T17:45:58.728821822Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-868c8dbd57-5bhwh,Uid:faa47950-5ad9-4518-9026-0ce0997471e9,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4800a32dc0ece1e37f0196ac2e5a09864687a071daee93d8f8db184499212a46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:45:58.729445 kubelet[3184]: E0912 17:45:58.729036 3184 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4800a32dc0ece1e37f0196ac2e5a09864687a071daee93d8f8db184499212a46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:45:58.729445 kubelet[3184]: 
E0912 17:45:58.729101 3184 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4800a32dc0ece1e37f0196ac2e5a09864687a071daee93d8f8db184499212a46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-868c8dbd57-5bhwh" Sep 12 17:45:58.729445 kubelet[3184]: E0912 17:45:58.729119 3184 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4800a32dc0ece1e37f0196ac2e5a09864687a071daee93d8f8db184499212a46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-868c8dbd57-5bhwh" Sep 12 17:45:58.729606 kubelet[3184]: E0912 17:45:58.729160 3184 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-868c8dbd57-5bhwh_calico-apiserver(faa47950-5ad9-4518-9026-0ce0997471e9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-868c8dbd57-5bhwh_calico-apiserver(faa47950-5ad9-4518-9026-0ce0997471e9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4800a32dc0ece1e37f0196ac2e5a09864687a071daee93d8f8db184499212a46\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-868c8dbd57-5bhwh" podUID="faa47950-5ad9-4518-9026-0ce0997471e9" Sep 12 17:45:58.743058 containerd[1714]: time="2025-09-12T17:45:58.741320833Z" level=error msg="Failed to destroy network for sandbox 
\"173fd8c2d24df8fdb0ea4d253630de5c941845ea80bc13b84b802583e715ba1e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:45:58.743058 containerd[1714]: time="2025-09-12T17:45:58.742111153Z" level=error msg="encountered an error cleaning up failed sandbox \"173fd8c2d24df8fdb0ea4d253630de5c941845ea80bc13b84b802583e715ba1e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:45:58.743058 containerd[1714]: time="2025-09-12T17:45:58.742281434Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-868c8dbd57-s6vg4,Uid:bbd20681-1f6f-410b-806a-7fe01c4c3226,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"173fd8c2d24df8fdb0ea4d253630de5c941845ea80bc13b84b802583e715ba1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:45:58.743472 kubelet[3184]: E0912 17:45:58.743430 3184 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"173fd8c2d24df8fdb0ea4d253630de5c941845ea80bc13b84b802583e715ba1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:45:58.743590 kubelet[3184]: E0912 17:45:58.743575 3184 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"173fd8c2d24df8fdb0ea4d253630de5c941845ea80bc13b84b802583e715ba1e\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-868c8dbd57-s6vg4" Sep 12 17:45:58.743686 kubelet[3184]: E0912 17:45:58.743670 3184 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"173fd8c2d24df8fdb0ea4d253630de5c941845ea80bc13b84b802583e715ba1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-868c8dbd57-s6vg4" Sep 12 17:45:58.743815 kubelet[3184]: E0912 17:45:58.743783 3184 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-868c8dbd57-s6vg4_calico-apiserver(bbd20681-1f6f-410b-806a-7fe01c4c3226)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-868c8dbd57-s6vg4_calico-apiserver(bbd20681-1f6f-410b-806a-7fe01c4c3226)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"173fd8c2d24df8fdb0ea4d253630de5c941845ea80bc13b84b802583e715ba1e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-868c8dbd57-s6vg4" podUID="bbd20681-1f6f-410b-806a-7fe01c4c3226" Sep 12 17:45:58.750989 containerd[1714]: time="2025-09-12T17:45:58.750927281Z" level=error msg="Failed to destroy network for sandbox \"4f641c9c531fe7560f6396fa91ec0d1ce4d9dad8ccc213245989b0aa5cba8ab2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:45:58.751697 containerd[1714]: 
time="2025-09-12T17:45:58.751664042Z" level=error msg="encountered an error cleaning up failed sandbox \"4f641c9c531fe7560f6396fa91ec0d1ce4d9dad8ccc213245989b0aa5cba8ab2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:45:58.751835 containerd[1714]: time="2025-09-12T17:45:58.751809362Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-64cb98bb48-57t2s,Uid:0dd061b5-685d-4d98-8a24-ddf59d9fadda,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4f641c9c531fe7560f6396fa91ec0d1ce4d9dad8ccc213245989b0aa5cba8ab2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:45:58.752147 kubelet[3184]: E0912 17:45:58.752069 3184 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f641c9c531fe7560f6396fa91ec0d1ce4d9dad8ccc213245989b0aa5cba8ab2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:45:58.752428 kubelet[3184]: E0912 17:45:58.752400 3184 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f641c9c531fe7560f6396fa91ec0d1ce4d9dad8ccc213245989b0aa5cba8ab2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-64cb98bb48-57t2s" Sep 12 17:45:58.752503 kubelet[3184]: E0912 17:45:58.752435 3184 kuberuntime_manager.go:1252] "CreatePodSandbox for 
pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f641c9c531fe7560f6396fa91ec0d1ce4d9dad8ccc213245989b0aa5cba8ab2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-64cb98bb48-57t2s" Sep 12 17:45:58.752536 kubelet[3184]: E0912 17:45:58.752488 3184 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-64cb98bb48-57t2s_calico-system(0dd061b5-685d-4d98-8a24-ddf59d9fadda)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-64cb98bb48-57t2s_calico-system(0dd061b5-685d-4d98-8a24-ddf59d9fadda)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4f641c9c531fe7560f6396fa91ec0d1ce4d9dad8ccc213245989b0aa5cba8ab2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-64cb98bb48-57t2s" podUID="0dd061b5-685d-4d98-8a24-ddf59d9fadda" Sep 12 17:45:58.753440 containerd[1714]: time="2025-09-12T17:45:58.753370603Z" level=error msg="Failed to destroy network for sandbox \"a61e7c042aef263e702f8b25fca4f7bb7a7092206684b86b7609f3484710cf90\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:45:58.753963 containerd[1714]: time="2025-09-12T17:45:58.753935244Z" level=error msg="encountered an error cleaning up failed sandbox \"a61e7c042aef263e702f8b25fca4f7bb7a7092206684b86b7609f3484710cf90\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Sep 12 17:45:58.754152 containerd[1714]: time="2025-09-12T17:45:58.754096404Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-slnk5,Uid:e27e3cc8-0ec3-48e6-8bfe-e729aef4366f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a61e7c042aef263e702f8b25fca4f7bb7a7092206684b86b7609f3484710cf90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:45:58.754478 kubelet[3184]: E0912 17:45:58.754438 3184 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a61e7c042aef263e702f8b25fca4f7bb7a7092206684b86b7609f3484710cf90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:45:58.754591 kubelet[3184]: E0912 17:45:58.754483 3184 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a61e7c042aef263e702f8b25fca4f7bb7a7092206684b86b7609f3484710cf90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-slnk5" Sep 12 17:45:58.754591 kubelet[3184]: E0912 17:45:58.754502 3184 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a61e7c042aef263e702f8b25fca4f7bb7a7092206684b86b7609f3484710cf90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-slnk5" Sep 
12 17:45:58.754591 kubelet[3184]: E0912 17:45:58.754552 3184 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-slnk5_calico-system(e27e3cc8-0ec3-48e6-8bfe-e729aef4366f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-slnk5_calico-system(e27e3cc8-0ec3-48e6-8bfe-e729aef4366f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a61e7c042aef263e702f8b25fca4f7bb7a7092206684b86b7609f3484710cf90\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-slnk5" podUID="e27e3cc8-0ec3-48e6-8bfe-e729aef4366f" Sep 12 17:45:59.394850 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ba4f512a2fc1ac29afedd0f5e54eb79f9951c31c8fb924d51640398aff809e55-shm.mount: Deactivated successfully. Sep 12 17:45:59.395146 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e7574ca7035713c5e387975709e4546cfa67b972fafaca77538d91d7bf3d6afa-shm.mount: Deactivated successfully. 
Sep 12 17:45:59.417693 kubelet[3184]: I0912 17:45:59.417656 3184 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a61e7c042aef263e702f8b25fca4f7bb7a7092206684b86b7609f3484710cf90" Sep 12 17:45:59.418974 containerd[1714]: time="2025-09-12T17:45:59.418939873Z" level=info msg="StopPodSandbox for \"a61e7c042aef263e702f8b25fca4f7bb7a7092206684b86b7609f3484710cf90\"" Sep 12 17:45:59.421574 kubelet[3184]: I0912 17:45:59.419498 3184 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="157d360263fb2c7af3aa7f9f8815c289dd33493d0095a436f60ec6ac662b0fa3" Sep 12 17:45:59.421663 containerd[1714]: time="2025-09-12T17:45:59.419933234Z" level=info msg="StopPodSandbox for \"157d360263fb2c7af3aa7f9f8815c289dd33493d0095a436f60ec6ac662b0fa3\"" Sep 12 17:45:59.421663 containerd[1714]: time="2025-09-12T17:45:59.420203274Z" level=info msg="Ensure that sandbox 157d360263fb2c7af3aa7f9f8815c289dd33493d0095a436f60ec6ac662b0fa3 in task-service has been cleanup successfully" Sep 12 17:45:59.421663 containerd[1714]: time="2025-09-12T17:45:59.420299754Z" level=info msg="Ensure that sandbox a61e7c042aef263e702f8b25fca4f7bb7a7092206684b86b7609f3484710cf90 in task-service has been cleanup successfully" Sep 12 17:45:59.422816 kubelet[3184]: I0912 17:45:59.422792 3184 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="173fd8c2d24df8fdb0ea4d253630de5c941845ea80bc13b84b802583e715ba1e" Sep 12 17:45:59.423846 containerd[1714]: time="2025-09-12T17:45:59.423758677Z" level=info msg="StopPodSandbox for \"173fd8c2d24df8fdb0ea4d253630de5c941845ea80bc13b84b802583e715ba1e\"" Sep 12 17:45:59.424081 containerd[1714]: time="2025-09-12T17:45:59.423921917Z" level=info msg="Ensure that sandbox 173fd8c2d24df8fdb0ea4d253630de5c941845ea80bc13b84b802583e715ba1e in task-service has been cleanup successfully" Sep 12 17:45:59.425829 kubelet[3184]: I0912 17:45:59.425480 3184 pod_container_deletor.go:80] "Container not found in 
pod's containers" containerID="4f641c9c531fe7560f6396fa91ec0d1ce4d9dad8ccc213245989b0aa5cba8ab2" Sep 12 17:45:59.426773 containerd[1714]: time="2025-09-12T17:45:59.426728200Z" level=info msg="StopPodSandbox for \"4f641c9c531fe7560f6396fa91ec0d1ce4d9dad8ccc213245989b0aa5cba8ab2\"" Sep 12 17:45:59.429104 containerd[1714]: time="2025-09-12T17:45:59.426894040Z" level=info msg="Ensure that sandbox 4f641c9c531fe7560f6396fa91ec0d1ce4d9dad8ccc213245989b0aa5cba8ab2 in task-service has been cleanup successfully" Sep 12 17:45:59.433140 kubelet[3184]: I0912 17:45:59.433098 3184 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba4f512a2fc1ac29afedd0f5e54eb79f9951c31c8fb924d51640398aff809e55" Sep 12 17:45:59.434705 containerd[1714]: time="2025-09-12T17:45:59.434661287Z" level=info msg="StopPodSandbox for \"ba4f512a2fc1ac29afedd0f5e54eb79f9951c31c8fb924d51640398aff809e55\"" Sep 12 17:45:59.434880 containerd[1714]: time="2025-09-12T17:45:59.434822367Z" level=info msg="Ensure that sandbox ba4f512a2fc1ac29afedd0f5e54eb79f9951c31c8fb924d51640398aff809e55 in task-service has been cleanup successfully" Sep 12 17:45:59.439331 kubelet[3184]: I0912 17:45:59.438440 3184 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7574ca7035713c5e387975709e4546cfa67b972fafaca77538d91d7bf3d6afa" Sep 12 17:45:59.440054 containerd[1714]: time="2025-09-12T17:45:59.439905491Z" level=info msg="StopPodSandbox for \"e7574ca7035713c5e387975709e4546cfa67b972fafaca77538d91d7bf3d6afa\"" Sep 12 17:45:59.442712 containerd[1714]: time="2025-09-12T17:45:59.441021772Z" level=info msg="Ensure that sandbox e7574ca7035713c5e387975709e4546cfa67b972fafaca77538d91d7bf3d6afa in task-service has been cleanup successfully" Sep 12 17:45:59.450838 kubelet[3184]: I0912 17:45:59.450716 3184 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4800a32dc0ece1e37f0196ac2e5a09864687a071daee93d8f8db184499212a46" Sep 12 17:45:59.453068 
containerd[1714]: time="2025-09-12T17:45:59.452726903Z" level=info msg="StopPodSandbox for \"4800a32dc0ece1e37f0196ac2e5a09864687a071daee93d8f8db184499212a46\"" Sep 12 17:45:59.454750 containerd[1714]: time="2025-09-12T17:45:59.454717705Z" level=info msg="Ensure that sandbox 4800a32dc0ece1e37f0196ac2e5a09864687a071daee93d8f8db184499212a46 in task-service has been cleanup successfully" Sep 12 17:45:59.510004 containerd[1714]: time="2025-09-12T17:45:59.509886473Z" level=error msg="StopPodSandbox for \"ba4f512a2fc1ac29afedd0f5e54eb79f9951c31c8fb924d51640398aff809e55\" failed" error="failed to destroy network for sandbox \"ba4f512a2fc1ac29afedd0f5e54eb79f9951c31c8fb924d51640398aff809e55\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:45:59.510423 kubelet[3184]: E0912 17:45:59.510382 3184 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ba4f512a2fc1ac29afedd0f5e54eb79f9951c31c8fb924d51640398aff809e55\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ba4f512a2fc1ac29afedd0f5e54eb79f9951c31c8fb924d51640398aff809e55" Sep 12 17:45:59.510506 kubelet[3184]: E0912 17:45:59.510435 3184 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ba4f512a2fc1ac29afedd0f5e54eb79f9951c31c8fb924d51640398aff809e55"} Sep 12 17:45:59.510506 kubelet[3184]: E0912 17:45:59.510474 3184 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cb0990fd-3735-45b3-86b2-04ad783e7243\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"ba4f512a2fc1ac29afedd0f5e54eb79f9951c31c8fb924d51640398aff809e55\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:45:59.510506 kubelet[3184]: E0912 17:45:59.510495 3184 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cb0990fd-3735-45b3-86b2-04ad783e7243\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ba4f512a2fc1ac29afedd0f5e54eb79f9951c31c8fb924d51640398aff809e55\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-84df6476b7-wbsmk" podUID="cb0990fd-3735-45b3-86b2-04ad783e7243" Sep 12 17:45:59.516229 containerd[1714]: time="2025-09-12T17:45:59.515993439Z" level=error msg="StopPodSandbox for \"a61e7c042aef263e702f8b25fca4f7bb7a7092206684b86b7609f3484710cf90\" failed" error="failed to destroy network for sandbox \"a61e7c042aef263e702f8b25fca4f7bb7a7092206684b86b7609f3484710cf90\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:45:59.516367 kubelet[3184]: E0912 17:45:59.516267 3184 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a61e7c042aef263e702f8b25fca4f7bb7a7092206684b86b7609f3484710cf90\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a61e7c042aef263e702f8b25fca4f7bb7a7092206684b86b7609f3484710cf90" Sep 12 17:45:59.516367 kubelet[3184]: E0912 17:45:59.516313 3184 
kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a61e7c042aef263e702f8b25fca4f7bb7a7092206684b86b7609f3484710cf90"} Sep 12 17:45:59.516367 kubelet[3184]: E0912 17:45:59.516348 3184 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e27e3cc8-0ec3-48e6-8bfe-e729aef4366f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a61e7c042aef263e702f8b25fca4f7bb7a7092206684b86b7609f3484710cf90\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:45:59.516480 kubelet[3184]: E0912 17:45:59.516369 3184 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e27e3cc8-0ec3-48e6-8bfe-e729aef4366f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a61e7c042aef263e702f8b25fca4f7bb7a7092206684b86b7609f3484710cf90\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-slnk5" podUID="e27e3cc8-0ec3-48e6-8bfe-e729aef4366f" Sep 12 17:45:59.528041 containerd[1714]: time="2025-09-12T17:45:59.527883129Z" level=error msg="StopPodSandbox for \"157d360263fb2c7af3aa7f9f8815c289dd33493d0095a436f60ec6ac662b0fa3\" failed" error="failed to destroy network for sandbox \"157d360263fb2c7af3aa7f9f8815c289dd33493d0095a436f60ec6ac662b0fa3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:45:59.528493 kubelet[3184]: E0912 17:45:59.528128 3184 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code 
= Unknown desc = failed to destroy network for sandbox \"157d360263fb2c7af3aa7f9f8815c289dd33493d0095a436f60ec6ac662b0fa3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="157d360263fb2c7af3aa7f9f8815c289dd33493d0095a436f60ec6ac662b0fa3" Sep 12 17:45:59.528493 kubelet[3184]: E0912 17:45:59.528176 3184 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"157d360263fb2c7af3aa7f9f8815c289dd33493d0095a436f60ec6ac662b0fa3"} Sep 12 17:45:59.528493 kubelet[3184]: E0912 17:45:59.528211 3184 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fcbac6f2-0b6a-4a25-b5bf-41bd3c3b4f83\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"157d360263fb2c7af3aa7f9f8815c289dd33493d0095a436f60ec6ac662b0fa3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:45:59.528493 kubelet[3184]: E0912 17:45:59.528252 3184 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fcbac6f2-0b6a-4a25-b5bf-41bd3c3b4f83\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"157d360263fb2c7af3aa7f9f8815c289dd33493d0095a436f60ec6ac662b0fa3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-kbs2p" podUID="fcbac6f2-0b6a-4a25-b5bf-41bd3c3b4f83" Sep 12 17:45:59.532488 containerd[1714]: time="2025-09-12T17:45:59.532443773Z" level=error msg="StopPodSandbox for 
\"e7574ca7035713c5e387975709e4546cfa67b972fafaca77538d91d7bf3d6afa\" failed" error="failed to destroy network for sandbox \"e7574ca7035713c5e387975709e4546cfa67b972fafaca77538d91d7bf3d6afa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:45:59.532991 kubelet[3184]: E0912 17:45:59.532841 3184 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e7574ca7035713c5e387975709e4546cfa67b972fafaca77538d91d7bf3d6afa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e7574ca7035713c5e387975709e4546cfa67b972fafaca77538d91d7bf3d6afa" Sep 12 17:45:59.532991 kubelet[3184]: E0912 17:45:59.532909 3184 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e7574ca7035713c5e387975709e4546cfa67b972fafaca77538d91d7bf3d6afa"} Sep 12 17:45:59.532991 kubelet[3184]: E0912 17:45:59.532951 3184 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"acb602f3-ad8a-4d55-aba0-1c4c66a93bb2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e7574ca7035713c5e387975709e4546cfa67b972fafaca77538d91d7bf3d6afa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:45:59.532991 kubelet[3184]: E0912 17:45:59.532979 3184 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"acb602f3-ad8a-4d55-aba0-1c4c66a93bb2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"e7574ca7035713c5e387975709e4546cfa67b972fafaca77538d91d7bf3d6afa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-x6pz9" podUID="acb602f3-ad8a-4d55-aba0-1c4c66a93bb2" Sep 12 17:45:59.538022 containerd[1714]: time="2025-09-12T17:45:59.537882138Z" level=error msg="StopPodSandbox for \"173fd8c2d24df8fdb0ea4d253630de5c941845ea80bc13b84b802583e715ba1e\" failed" error="failed to destroy network for sandbox \"173fd8c2d24df8fdb0ea4d253630de5c941845ea80bc13b84b802583e715ba1e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:45:59.538280 kubelet[3184]: E0912 17:45:59.538095 3184 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"173fd8c2d24df8fdb0ea4d253630de5c941845ea80bc13b84b802583e715ba1e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="173fd8c2d24df8fdb0ea4d253630de5c941845ea80bc13b84b802583e715ba1e" Sep 12 17:45:59.538280 kubelet[3184]: E0912 17:45:59.538144 3184 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"173fd8c2d24df8fdb0ea4d253630de5c941845ea80bc13b84b802583e715ba1e"} Sep 12 17:45:59.538280 kubelet[3184]: E0912 17:45:59.538174 3184 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bbd20681-1f6f-410b-806a-7fe01c4c3226\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"173fd8c2d24df8fdb0ea4d253630de5c941845ea80bc13b84b802583e715ba1e\\\": plugin type=\\\"calico\\\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:45:59.538280 kubelet[3184]: E0912 17:45:59.538198 3184 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bbd20681-1f6f-410b-806a-7fe01c4c3226\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"173fd8c2d24df8fdb0ea4d253630de5c941845ea80bc13b84b802583e715ba1e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-868c8dbd57-s6vg4" podUID="bbd20681-1f6f-410b-806a-7fe01c4c3226" Sep 12 17:45:59.539362 containerd[1714]: time="2025-09-12T17:45:59.539304980Z" level=error msg="StopPodSandbox for \"4f641c9c531fe7560f6396fa91ec0d1ce4d9dad8ccc213245989b0aa5cba8ab2\" failed" error="failed to destroy network for sandbox \"4f641c9c531fe7560f6396fa91ec0d1ce4d9dad8ccc213245989b0aa5cba8ab2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:45:59.539587 kubelet[3184]: E0912 17:45:59.539464 3184 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4f641c9c531fe7560f6396fa91ec0d1ce4d9dad8ccc213245989b0aa5cba8ab2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4f641c9c531fe7560f6396fa91ec0d1ce4d9dad8ccc213245989b0aa5cba8ab2" Sep 12 17:45:59.539650 kubelet[3184]: E0912 17:45:59.539595 3184 kuberuntime_manager.go:1586] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"4f641c9c531fe7560f6396fa91ec0d1ce4d9dad8ccc213245989b0aa5cba8ab2"} Sep 12 17:45:59.539650 kubelet[3184]: E0912 17:45:59.539641 3184 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0dd061b5-685d-4d98-8a24-ddf59d9fadda\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4f641c9c531fe7560f6396fa91ec0d1ce4d9dad8ccc213245989b0aa5cba8ab2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:45:59.539715 kubelet[3184]: E0912 17:45:59.539661 3184 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0dd061b5-685d-4d98-8a24-ddf59d9fadda\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4f641c9c531fe7560f6396fa91ec0d1ce4d9dad8ccc213245989b0aa5cba8ab2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-64cb98bb48-57t2s" podUID="0dd061b5-685d-4d98-8a24-ddf59d9fadda" Sep 12 17:45:59.544949 containerd[1714]: time="2025-09-12T17:45:59.544899384Z" level=error msg="StopPodSandbox for \"4800a32dc0ece1e37f0196ac2e5a09864687a071daee93d8f8db184499212a46\" failed" error="failed to destroy network for sandbox \"4800a32dc0ece1e37f0196ac2e5a09864687a071daee93d8f8db184499212a46\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:45:59.545169 kubelet[3184]: E0912 17:45:59.545104 3184 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"4800a32dc0ece1e37f0196ac2e5a09864687a071daee93d8f8db184499212a46\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4800a32dc0ece1e37f0196ac2e5a09864687a071daee93d8f8db184499212a46" Sep 12 17:45:59.545216 kubelet[3184]: E0912 17:45:59.545178 3184 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4800a32dc0ece1e37f0196ac2e5a09864687a071daee93d8f8db184499212a46"} Sep 12 17:45:59.545216 kubelet[3184]: E0912 17:45:59.545204 3184 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"faa47950-5ad9-4518-9026-0ce0997471e9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4800a32dc0ece1e37f0196ac2e5a09864687a071daee93d8f8db184499212a46\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:45:59.545312 kubelet[3184]: E0912 17:45:59.545225 3184 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"faa47950-5ad9-4518-9026-0ce0997471e9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4800a32dc0ece1e37f0196ac2e5a09864687a071daee93d8f8db184499212a46\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-868c8dbd57-5bhwh" podUID="faa47950-5ad9-4518-9026-0ce0997471e9" Sep 12 17:46:05.347735 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4088944558.mount: Deactivated successfully. 
Sep 12 17:46:05.636037 containerd[1714]: time="2025-09-12T17:46:05.635292345Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:46:05.638532 containerd[1714]: time="2025-09-12T17:46:05.638489908Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=151100457" Sep 12 17:46:05.642642 containerd[1714]: time="2025-09-12T17:46:05.642590591Z" level=info msg="ImageCreate event name:\"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:46:05.648055 containerd[1714]: time="2025-09-12T17:46:05.647990676Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:46:05.648908 containerd[1714]: time="2025-09-12T17:46:05.648546036Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"151100319\" in 7.236056255s" Sep 12 17:46:05.648908 containerd[1714]: time="2025-09-12T17:46:05.648582716Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\"" Sep 12 17:46:05.678163 containerd[1714]: time="2025-09-12T17:46:05.678118341Z" level=info msg="CreateContainer within sandbox \"51c99a352205b6bd2df25c83e03eccc168f1cb69836a5f89b6c92adb50fdb863\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 12 17:46:05.736499 containerd[1714]: time="2025-09-12T17:46:05.736452749Z" level=info 
msg="CreateContainer within sandbox \"51c99a352205b6bd2df25c83e03eccc168f1cb69836a5f89b6c92adb50fdb863\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"531e823c573ffc01b685a97a2d0cddbeb5088b150af1e00a7923849d2a0a534a\"" Sep 12 17:46:05.738665 containerd[1714]: time="2025-09-12T17:46:05.738611511Z" level=info msg="StartContainer for \"531e823c573ffc01b685a97a2d0cddbeb5088b150af1e00a7923849d2a0a534a\"" Sep 12 17:46:05.766444 systemd[1]: Started cri-containerd-531e823c573ffc01b685a97a2d0cddbeb5088b150af1e00a7923849d2a0a534a.scope - libcontainer container 531e823c573ffc01b685a97a2d0cddbeb5088b150af1e00a7923849d2a0a534a. Sep 12 17:46:05.798727 containerd[1714]: time="2025-09-12T17:46:05.798677121Z" level=info msg="StartContainer for \"531e823c573ffc01b685a97a2d0cddbeb5088b150af1e00a7923849d2a0a534a\" returns successfully" Sep 12 17:46:06.149835 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 12 17:46:06.149976 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Sep 12 17:46:06.300771 containerd[1714]: time="2025-09-12T17:46:06.300468737Z" level=info msg="StopPodSandbox for \"4f641c9c531fe7560f6396fa91ec0d1ce4d9dad8ccc213245989b0aa5cba8ab2\"" Sep 12 17:46:06.470432 containerd[1714]: 2025-09-12 17:46:06.418 [INFO][4424] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4f641c9c531fe7560f6396fa91ec0d1ce4d9dad8ccc213245989b0aa5cba8ab2" Sep 12 17:46:06.470432 containerd[1714]: 2025-09-12 17:46:06.418 [INFO][4424] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4f641c9c531fe7560f6396fa91ec0d1ce4d9dad8ccc213245989b0aa5cba8ab2" iface="eth0" netns="/var/run/netns/cni-e755e3e1-3bb0-fc5d-a679-350dc917109c" Sep 12 17:46:06.470432 containerd[1714]: 2025-09-12 17:46:06.419 [INFO][4424] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="4f641c9c531fe7560f6396fa91ec0d1ce4d9dad8ccc213245989b0aa5cba8ab2" iface="eth0" netns="/var/run/netns/cni-e755e3e1-3bb0-fc5d-a679-350dc917109c" Sep 12 17:46:06.470432 containerd[1714]: 2025-09-12 17:46:06.420 [INFO][4424] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4f641c9c531fe7560f6396fa91ec0d1ce4d9dad8ccc213245989b0aa5cba8ab2" iface="eth0" netns="/var/run/netns/cni-e755e3e1-3bb0-fc5d-a679-350dc917109c" Sep 12 17:46:06.470432 containerd[1714]: 2025-09-12 17:46:06.420 [INFO][4424] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4f641c9c531fe7560f6396fa91ec0d1ce4d9dad8ccc213245989b0aa5cba8ab2" Sep 12 17:46:06.470432 containerd[1714]: 2025-09-12 17:46:06.420 [INFO][4424] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4f641c9c531fe7560f6396fa91ec0d1ce4d9dad8ccc213245989b0aa5cba8ab2" Sep 12 17:46:06.470432 containerd[1714]: 2025-09-12 17:46:06.450 [INFO][4437] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4f641c9c531fe7560f6396fa91ec0d1ce4d9dad8ccc213245989b0aa5cba8ab2" HandleID="k8s-pod-network.4f641c9c531fe7560f6396fa91ec0d1ce4d9dad8ccc213245989b0aa5cba8ab2" Workload="ci--4081.3.6--a--ca65cd0ccc-k8s-whisker--64cb98bb48--57t2s-eth0" Sep 12 17:46:06.470432 containerd[1714]: 2025-09-12 17:46:06.451 [INFO][4437] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:46:06.470432 containerd[1714]: 2025-09-12 17:46:06.451 [INFO][4437] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:46:06.470432 containerd[1714]: 2025-09-12 17:46:06.459 [WARNING][4437] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4f641c9c531fe7560f6396fa91ec0d1ce4d9dad8ccc213245989b0aa5cba8ab2" HandleID="k8s-pod-network.4f641c9c531fe7560f6396fa91ec0d1ce4d9dad8ccc213245989b0aa5cba8ab2" Workload="ci--4081.3.6--a--ca65cd0ccc-k8s-whisker--64cb98bb48--57t2s-eth0" Sep 12 17:46:06.470432 containerd[1714]: 2025-09-12 17:46:06.459 [INFO][4437] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4f641c9c531fe7560f6396fa91ec0d1ce4d9dad8ccc213245989b0aa5cba8ab2" HandleID="k8s-pod-network.4f641c9c531fe7560f6396fa91ec0d1ce4d9dad8ccc213245989b0aa5cba8ab2" Workload="ci--4081.3.6--a--ca65cd0ccc-k8s-whisker--64cb98bb48--57t2s-eth0" Sep 12 17:46:06.470432 containerd[1714]: 2025-09-12 17:46:06.461 [INFO][4437] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:46:06.470432 containerd[1714]: 2025-09-12 17:46:06.466 [INFO][4424] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4f641c9c531fe7560f6396fa91ec0d1ce4d9dad8ccc213245989b0aa5cba8ab2" Sep 12 17:46:06.474276 containerd[1714]: time="2025-09-12T17:46:06.474105401Z" level=info msg="TearDown network for sandbox \"4f641c9c531fe7560f6396fa91ec0d1ce4d9dad8ccc213245989b0aa5cba8ab2\" successfully" Sep 12 17:46:06.474276 containerd[1714]: time="2025-09-12T17:46:06.474148001Z" level=info msg="StopPodSandbox for \"4f641c9c531fe7560f6396fa91ec0d1ce4d9dad8ccc213245989b0aa5cba8ab2\" returns successfully" Sep 12 17:46:06.476466 systemd[1]: run-netns-cni\x2de755e3e1\x2d3bb0\x2dfc5d\x2da679\x2d350dc917109c.mount: Deactivated successfully. 
Sep 12 17:46:06.561902 kubelet[3184]: I0912 17:46:06.561851 3184 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4cfnt\" (UniqueName: \"kubernetes.io/projected/0dd061b5-685d-4d98-8a24-ddf59d9fadda-kube-api-access-4cfnt\") pod \"0dd061b5-685d-4d98-8a24-ddf59d9fadda\" (UID: \"0dd061b5-685d-4d98-8a24-ddf59d9fadda\") " Sep 12 17:46:06.561902 kubelet[3184]: I0912 17:46:06.561898 3184 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0dd061b5-685d-4d98-8a24-ddf59d9fadda-whisker-backend-key-pair\") pod \"0dd061b5-685d-4d98-8a24-ddf59d9fadda\" (UID: \"0dd061b5-685d-4d98-8a24-ddf59d9fadda\") " Sep 12 17:46:06.562376 kubelet[3184]: I0912 17:46:06.561921 3184 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0dd061b5-685d-4d98-8a24-ddf59d9fadda-whisker-ca-bundle\") pod \"0dd061b5-685d-4d98-8a24-ddf59d9fadda\" (UID: \"0dd061b5-685d-4d98-8a24-ddf59d9fadda\") " Sep 12 17:46:06.562986 kubelet[3184]: I0912 17:46:06.562921 3184 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0dd061b5-685d-4d98-8a24-ddf59d9fadda-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "0dd061b5-685d-4d98-8a24-ddf59d9fadda" (UID: "0dd061b5-685d-4d98-8a24-ddf59d9fadda"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 12 17:46:06.566985 systemd[1]: var-lib-kubelet-pods-0dd061b5\x2d685d\x2d4d98\x2d8a24\x2dddf59d9fadda-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4cfnt.mount: Deactivated successfully. 
Sep 12 17:46:06.570369 kubelet[3184]: I0912 17:46:06.570324 3184 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dd061b5-685d-4d98-8a24-ddf59d9fadda-kube-api-access-4cfnt" (OuterVolumeSpecName: "kube-api-access-4cfnt") pod "0dd061b5-685d-4d98-8a24-ddf59d9fadda" (UID: "0dd061b5-685d-4d98-8a24-ddf59d9fadda"). InnerVolumeSpecName "kube-api-access-4cfnt". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 17:46:06.572946 kubelet[3184]: I0912 17:46:06.572832 3184 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dd061b5-685d-4d98-8a24-ddf59d9fadda-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "0dd061b5-685d-4d98-8a24-ddf59d9fadda" (UID: "0dd061b5-685d-4d98-8a24-ddf59d9fadda"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 12 17:46:06.662395 kubelet[3184]: I0912 17:46:06.662346 3184 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0dd061b5-685d-4d98-8a24-ddf59d9fadda-whisker-backend-key-pair\") on node \"ci-4081.3.6-a-ca65cd0ccc\" DevicePath \"\"" Sep 12 17:46:06.662395 kubelet[3184]: I0912 17:46:06.662383 3184 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0dd061b5-685d-4d98-8a24-ddf59d9fadda-whisker-ca-bundle\") on node \"ci-4081.3.6-a-ca65cd0ccc\" DevicePath \"\"" Sep 12 17:46:06.662395 kubelet[3184]: I0912 17:46:06.662394 3184 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4cfnt\" (UniqueName: \"kubernetes.io/projected/0dd061b5-685d-4d98-8a24-ddf59d9fadda-kube-api-access-4cfnt\") on node \"ci-4081.3.6-a-ca65cd0ccc\" DevicePath \"\"" Sep 12 17:46:07.347808 systemd[1]: var-lib-kubelet-pods-0dd061b5\x2d685d\x2d4d98\x2d8a24\x2dddf59d9fadda-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: 
Deactivated successfully. Sep 12 17:46:07.480481 systemd[1]: Removed slice kubepods-besteffort-pod0dd061b5_685d_4d98_8a24_ddf59d9fadda.slice - libcontainer container kubepods-besteffort-pod0dd061b5_685d_4d98_8a24_ddf59d9fadda.slice. Sep 12 17:46:07.510255 kubelet[3184]: I0912 17:46:07.508832 3184 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-sw4tx" podStartSLOduration=3.265598148 podStartE2EDuration="22.508813979s" podCreationTimestamp="2025-09-12 17:45:45 +0000 UTC" firstStartedPulling="2025-09-12 17:45:46.406099446 +0000 UTC m=+24.261727913" lastFinishedPulling="2025-09-12 17:46:05.649315277 +0000 UTC m=+43.504943744" observedRunningTime="2025-09-12 17:46:06.506820748 +0000 UTC m=+44.362449215" watchObservedRunningTime="2025-09-12 17:46:07.508813979 +0000 UTC m=+45.364442446" Sep 12 17:46:07.609532 systemd[1]: Created slice kubepods-besteffort-pod60ae7a3f_aa6a_4139_b96a_fdd03301f022.slice - libcontainer container kubepods-besteffort-pod60ae7a3f_aa6a_4139_b96a_fdd03301f022.slice. 
Sep 12 17:46:07.668972 kubelet[3184]: I0912 17:46:07.668904 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/60ae7a3f-aa6a-4139-b96a-fdd03301f022-whisker-ca-bundle\") pod \"whisker-55964d555f-9vhf6\" (UID: \"60ae7a3f-aa6a-4139-b96a-fdd03301f022\") " pod="calico-system/whisker-55964d555f-9vhf6" Sep 12 17:46:07.668972 kubelet[3184]: I0912 17:46:07.668962 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rl2q\" (UniqueName: \"kubernetes.io/projected/60ae7a3f-aa6a-4139-b96a-fdd03301f022-kube-api-access-6rl2q\") pod \"whisker-55964d555f-9vhf6\" (UID: \"60ae7a3f-aa6a-4139-b96a-fdd03301f022\") " pod="calico-system/whisker-55964d555f-9vhf6" Sep 12 17:46:07.669736 kubelet[3184]: I0912 17:46:07.668985 3184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/60ae7a3f-aa6a-4139-b96a-fdd03301f022-whisker-backend-key-pair\") pod \"whisker-55964d555f-9vhf6\" (UID: \"60ae7a3f-aa6a-4139-b96a-fdd03301f022\") " pod="calico-system/whisker-55964d555f-9vhf6" Sep 12 17:46:07.915493 containerd[1714]: time="2025-09-12T17:46:07.914980155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-55964d555f-9vhf6,Uid:60ae7a3f-aa6a-4139-b96a-fdd03301f022,Namespace:calico-system,Attempt:0,}" Sep 12 17:46:08.134265 kernel: bpftool[4638]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 12 17:46:08.136770 systemd-networkd[1576]: cali5d142b650a4: Link UP Sep 12 17:46:08.136919 systemd-networkd[1576]: cali5d142b650a4: Gained carrier Sep 12 17:46:08.157985 containerd[1714]: 2025-09-12 17:46:07.995 [INFO][4591] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 17:46:08.157985 containerd[1714]: 2025-09-12 17:46:08.017 [INFO][4591] cni-plugin/plugin.go 340: Calico CNI 
found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--a--ca65cd0ccc-k8s-whisker--55964d555f--9vhf6-eth0 whisker-55964d555f- calico-system 60ae7a3f-aa6a-4139-b96a-fdd03301f022 946 0 2025-09-12 17:46:07 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:55964d555f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.6-a-ca65cd0ccc whisker-55964d555f-9vhf6 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali5d142b650a4 [] [] }} ContainerID="d027aa289f6becf7bc8e638fd3dde937c061904bcbfad4f2bc33bcc4766f757a" Namespace="calico-system" Pod="whisker-55964d555f-9vhf6" WorkloadEndpoint="ci--4081.3.6--a--ca65cd0ccc-k8s-whisker--55964d555f--9vhf6-" Sep 12 17:46:08.157985 containerd[1714]: 2025-09-12 17:46:08.017 [INFO][4591] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d027aa289f6becf7bc8e638fd3dde937c061904bcbfad4f2bc33bcc4766f757a" Namespace="calico-system" Pod="whisker-55964d555f-9vhf6" WorkloadEndpoint="ci--4081.3.6--a--ca65cd0ccc-k8s-whisker--55964d555f--9vhf6-eth0" Sep 12 17:46:08.157985 containerd[1714]: 2025-09-12 17:46:08.054 [INFO][4604] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d027aa289f6becf7bc8e638fd3dde937c061904bcbfad4f2bc33bcc4766f757a" HandleID="k8s-pod-network.d027aa289f6becf7bc8e638fd3dde937c061904bcbfad4f2bc33bcc4766f757a" Workload="ci--4081.3.6--a--ca65cd0ccc-k8s-whisker--55964d555f--9vhf6-eth0" Sep 12 17:46:08.157985 containerd[1714]: 2025-09-12 17:46:08.054 [INFO][4604] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d027aa289f6becf7bc8e638fd3dde937c061904bcbfad4f2bc33bcc4766f757a" HandleID="k8s-pod-network.d027aa289f6becf7bc8e638fd3dde937c061904bcbfad4f2bc33bcc4766f757a" Workload="ci--4081.3.6--a--ca65cd0ccc-k8s-whisker--55964d555f--9vhf6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0x400024b0e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-a-ca65cd0ccc", "pod":"whisker-55964d555f-9vhf6", "timestamp":"2025-09-12 17:46:08.054775031 +0000 UTC"}, Hostname:"ci-4081.3.6-a-ca65cd0ccc", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:46:08.157985 containerd[1714]: 2025-09-12 17:46:08.055 [INFO][4604] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:46:08.157985 containerd[1714]: 2025-09-12 17:46:08.055 [INFO][4604] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:46:08.157985 containerd[1714]: 2025-09-12 17:46:08.055 [INFO][4604] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-a-ca65cd0ccc' Sep 12 17:46:08.157985 containerd[1714]: 2025-09-12 17:46:08.066 [INFO][4604] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d027aa289f6becf7bc8e638fd3dde937c061904bcbfad4f2bc33bcc4766f757a" host="ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:46:08.157985 containerd[1714]: 2025-09-12 17:46:08.072 [INFO][4604] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:46:08.157985 containerd[1714]: 2025-09-12 17:46:08.077 [INFO][4604] ipam/ipam.go 511: Trying affinity for 192.168.21.128/26 host="ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:46:08.157985 containerd[1714]: 2025-09-12 17:46:08.079 [INFO][4604] ipam/ipam.go 158: Attempting to load block cidr=192.168.21.128/26 host="ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:46:08.157985 containerd[1714]: 2025-09-12 17:46:08.082 [INFO][4604] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.21.128/26 host="ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:46:08.157985 containerd[1714]: 2025-09-12 17:46:08.082 [INFO][4604] ipam/ipam.go 1220: Attempting to assign 1 addresses from block 
block=192.168.21.128/26 handle="k8s-pod-network.d027aa289f6becf7bc8e638fd3dde937c061904bcbfad4f2bc33bcc4766f757a" host="ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:46:08.157985 containerd[1714]: 2025-09-12 17:46:08.085 [INFO][4604] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d027aa289f6becf7bc8e638fd3dde937c061904bcbfad4f2bc33bcc4766f757a Sep 12 17:46:08.157985 containerd[1714]: 2025-09-12 17:46:08.097 [INFO][4604] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.21.128/26 handle="k8s-pod-network.d027aa289f6becf7bc8e638fd3dde937c061904bcbfad4f2bc33bcc4766f757a" host="ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:46:08.157985 containerd[1714]: 2025-09-12 17:46:08.103 [INFO][4604] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.21.129/26] block=192.168.21.128/26 handle="k8s-pod-network.d027aa289f6becf7bc8e638fd3dde937c061904bcbfad4f2bc33bcc4766f757a" host="ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:46:08.157985 containerd[1714]: 2025-09-12 17:46:08.103 [INFO][4604] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.21.129/26] handle="k8s-pod-network.d027aa289f6becf7bc8e638fd3dde937c061904bcbfad4f2bc33bcc4766f757a" host="ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:46:08.157985 containerd[1714]: 2025-09-12 17:46:08.103 [INFO][4604] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:46:08.157985 containerd[1714]: 2025-09-12 17:46:08.103 [INFO][4604] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.21.129/26] IPv6=[] ContainerID="d027aa289f6becf7bc8e638fd3dde937c061904bcbfad4f2bc33bcc4766f757a" HandleID="k8s-pod-network.d027aa289f6becf7bc8e638fd3dde937c061904bcbfad4f2bc33bcc4766f757a" Workload="ci--4081.3.6--a--ca65cd0ccc-k8s-whisker--55964d555f--9vhf6-eth0" Sep 12 17:46:08.161166 containerd[1714]: 2025-09-12 17:46:08.106 [INFO][4591] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d027aa289f6becf7bc8e638fd3dde937c061904bcbfad4f2bc33bcc4766f757a" Namespace="calico-system" Pod="whisker-55964d555f-9vhf6" WorkloadEndpoint="ci--4081.3.6--a--ca65cd0ccc-k8s-whisker--55964d555f--9vhf6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--ca65cd0ccc-k8s-whisker--55964d555f--9vhf6-eth0", GenerateName:"whisker-55964d555f-", Namespace:"calico-system", SelfLink:"", UID:"60ae7a3f-aa6a-4139-b96a-fdd03301f022", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 46, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"55964d555f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-ca65cd0ccc", ContainerID:"", Pod:"whisker-55964d555f-9vhf6", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.21.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.whisker"}, InterfaceName:"cali5d142b650a4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:46:08.161166 containerd[1714]: 2025-09-12 17:46:08.106 [INFO][4591] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.21.129/32] ContainerID="d027aa289f6becf7bc8e638fd3dde937c061904bcbfad4f2bc33bcc4766f757a" Namespace="calico-system" Pod="whisker-55964d555f-9vhf6" WorkloadEndpoint="ci--4081.3.6--a--ca65cd0ccc-k8s-whisker--55964d555f--9vhf6-eth0" Sep 12 17:46:08.161166 containerd[1714]: 2025-09-12 17:46:08.106 [INFO][4591] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5d142b650a4 ContainerID="d027aa289f6becf7bc8e638fd3dde937c061904bcbfad4f2bc33bcc4766f757a" Namespace="calico-system" Pod="whisker-55964d555f-9vhf6" WorkloadEndpoint="ci--4081.3.6--a--ca65cd0ccc-k8s-whisker--55964d555f--9vhf6-eth0" Sep 12 17:46:08.161166 containerd[1714]: 2025-09-12 17:46:08.136 [INFO][4591] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d027aa289f6becf7bc8e638fd3dde937c061904bcbfad4f2bc33bcc4766f757a" Namespace="calico-system" Pod="whisker-55964d555f-9vhf6" WorkloadEndpoint="ci--4081.3.6--a--ca65cd0ccc-k8s-whisker--55964d555f--9vhf6-eth0" Sep 12 17:46:08.161166 containerd[1714]: 2025-09-12 17:46:08.137 [INFO][4591] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d027aa289f6becf7bc8e638fd3dde937c061904bcbfad4f2bc33bcc4766f757a" Namespace="calico-system" Pod="whisker-55964d555f-9vhf6" WorkloadEndpoint="ci--4081.3.6--a--ca65cd0ccc-k8s-whisker--55964d555f--9vhf6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--ca65cd0ccc-k8s-whisker--55964d555f--9vhf6-eth0", GenerateName:"whisker-55964d555f-", Namespace:"calico-system", SelfLink:"", 
UID:"60ae7a3f-aa6a-4139-b96a-fdd03301f022", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 46, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"55964d555f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-ca65cd0ccc", ContainerID:"d027aa289f6becf7bc8e638fd3dde937c061904bcbfad4f2bc33bcc4766f757a", Pod:"whisker-55964d555f-9vhf6", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.21.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5d142b650a4", MAC:"c6:47:1f:ee:05:b0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:46:08.161166 containerd[1714]: 2025-09-12 17:46:08.152 [INFO][4591] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d027aa289f6becf7bc8e638fd3dde937c061904bcbfad4f2bc33bcc4766f757a" Namespace="calico-system" Pod="whisker-55964d555f-9vhf6" WorkloadEndpoint="ci--4081.3.6--a--ca65cd0ccc-k8s-whisker--55964d555f--9vhf6-eth0" Sep 12 17:46:08.189987 containerd[1714]: time="2025-09-12T17:46:08.188893982Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:46:08.189987 containerd[1714]: time="2025-09-12T17:46:08.188954863Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:46:08.189987 containerd[1714]: time="2025-09-12T17:46:08.188979463Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:46:08.189987 containerd[1714]: time="2025-09-12T17:46:08.189072263Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:46:08.210399 systemd[1]: Started cri-containerd-d027aa289f6becf7bc8e638fd3dde937c061904bcbfad4f2bc33bcc4766f757a.scope - libcontainer container d027aa289f6becf7bc8e638fd3dde937c061904bcbfad4f2bc33bcc4766f757a. Sep 12 17:46:08.256243 containerd[1714]: time="2025-09-12T17:46:08.256192598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-55964d555f-9vhf6,Uid:60ae7a3f-aa6a-4139-b96a-fdd03301f022,Namespace:calico-system,Attempt:0,} returns sandbox id \"d027aa289f6becf7bc8e638fd3dde937c061904bcbfad4f2bc33bcc4766f757a\"" Sep 12 17:46:08.266180 containerd[1714]: time="2025-09-12T17:46:08.265152966Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 12 17:46:08.268836 kubelet[3184]: I0912 17:46:08.268798 3184 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dd061b5-685d-4d98-8a24-ddf59d9fadda" path="/var/lib/kubelet/pods/0dd061b5-685d-4d98-8a24-ddf59d9fadda/volumes" Sep 12 17:46:08.420104 systemd-networkd[1576]: vxlan.calico: Link UP Sep 12 17:46:08.420115 systemd-networkd[1576]: vxlan.calico: Gained carrier Sep 12 17:46:09.632509 systemd-networkd[1576]: cali5d142b650a4: Gained IPv6LL Sep 12 17:46:10.015343 systemd-networkd[1576]: vxlan.calico: Gained IPv6LL Sep 12 17:46:10.039793 containerd[1714]: time="2025-09-12T17:46:10.039541176Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:46:10.051919 containerd[1714]: 
time="2025-09-12T17:46:10.051863587Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4605606" Sep 12 17:46:10.052140 containerd[1714]: time="2025-09-12T17:46:10.052114667Z" level=info msg="ImageCreate event name:\"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:46:10.057738 containerd[1714]: time="2025-09-12T17:46:10.057682832Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:46:10.058650 containerd[1714]: time="2025-09-12T17:46:10.058619952Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"5974839\" in 1.793426226s" Sep 12 17:46:10.058758 containerd[1714]: time="2025-09-12T17:46:10.058742673Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\"" Sep 12 17:46:10.072010 containerd[1714]: time="2025-09-12T17:46:10.071959804Z" level=info msg="CreateContainer within sandbox \"d027aa289f6becf7bc8e638fd3dde937c061904bcbfad4f2bc33bcc4766f757a\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 12 17:46:10.646914 containerd[1714]: time="2025-09-12T17:46:10.646869090Z" level=info msg="CreateContainer within sandbox \"d027aa289f6becf7bc8e638fd3dde937c061904bcbfad4f2bc33bcc4766f757a\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"3330a529fc6897da5c2ee1216d4c85d6709fbbbd9329e66dcf8699dd98193eaf\"" Sep 12 17:46:10.651574 
containerd[1714]: time="2025-09-12T17:46:10.650181692Z" level=info msg="StartContainer for \"3330a529fc6897da5c2ee1216d4c85d6709fbbbd9329e66dcf8699dd98193eaf\"" Sep 12 17:46:10.688438 systemd[1]: Started cri-containerd-3330a529fc6897da5c2ee1216d4c85d6709fbbbd9329e66dcf8699dd98193eaf.scope - libcontainer container 3330a529fc6897da5c2ee1216d4c85d6709fbbbd9329e66dcf8699dd98193eaf. Sep 12 17:46:11.478644 containerd[1714]: time="2025-09-12T17:46:11.478518712Z" level=info msg="StartContainer for \"3330a529fc6897da5c2ee1216d4c85d6709fbbbd9329e66dcf8699dd98193eaf\" returns successfully" Sep 12 17:46:11.480106 containerd[1714]: time="2025-09-12T17:46:11.479874273Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 12 17:46:12.265348 containerd[1714]: time="2025-09-12T17:46:12.265212537Z" level=info msg="StopPodSandbox for \"a61e7c042aef263e702f8b25fca4f7bb7a7092206684b86b7609f3484710cf90\"" Sep 12 17:46:12.270143 containerd[1714]: time="2025-09-12T17:46:12.269944381Z" level=info msg="StopPodSandbox for \"157d360263fb2c7af3aa7f9f8815c289dd33493d0095a436f60ec6ac662b0fa3\"" Sep 12 17:46:12.394272 containerd[1714]: 2025-09-12 17:46:12.339 [INFO][4814] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a61e7c042aef263e702f8b25fca4f7bb7a7092206684b86b7609f3484710cf90" Sep 12 17:46:12.394272 containerd[1714]: 2025-09-12 17:46:12.339 [INFO][4814] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a61e7c042aef263e702f8b25fca4f7bb7a7092206684b86b7609f3484710cf90" iface="eth0" netns="/var/run/netns/cni-22a9d21c-86f7-723f-9706-72329f94b173" Sep 12 17:46:12.394272 containerd[1714]: 2025-09-12 17:46:12.339 [INFO][4814] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="a61e7c042aef263e702f8b25fca4f7bb7a7092206684b86b7609f3484710cf90" iface="eth0" netns="/var/run/netns/cni-22a9d21c-86f7-723f-9706-72329f94b173" Sep 12 17:46:12.394272 containerd[1714]: 2025-09-12 17:46:12.340 [INFO][4814] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a61e7c042aef263e702f8b25fca4f7bb7a7092206684b86b7609f3484710cf90" iface="eth0" netns="/var/run/netns/cni-22a9d21c-86f7-723f-9706-72329f94b173" Sep 12 17:46:12.394272 containerd[1714]: 2025-09-12 17:46:12.340 [INFO][4814] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a61e7c042aef263e702f8b25fca4f7bb7a7092206684b86b7609f3484710cf90" Sep 12 17:46:12.394272 containerd[1714]: 2025-09-12 17:46:12.340 [INFO][4814] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a61e7c042aef263e702f8b25fca4f7bb7a7092206684b86b7609f3484710cf90" Sep 12 17:46:12.394272 containerd[1714]: 2025-09-12 17:46:12.374 [INFO][4831] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a61e7c042aef263e702f8b25fca4f7bb7a7092206684b86b7609f3484710cf90" HandleID="k8s-pod-network.a61e7c042aef263e702f8b25fca4f7bb7a7092206684b86b7609f3484710cf90" Workload="ci--4081.3.6--a--ca65cd0ccc-k8s-goldmane--54d579b49d--slnk5-eth0" Sep 12 17:46:12.394272 containerd[1714]: 2025-09-12 17:46:12.375 [INFO][4831] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:46:12.394272 containerd[1714]: 2025-09-12 17:46:12.375 [INFO][4831] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:46:12.394272 containerd[1714]: 2025-09-12 17:46:12.384 [WARNING][4831] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a61e7c042aef263e702f8b25fca4f7bb7a7092206684b86b7609f3484710cf90" HandleID="k8s-pod-network.a61e7c042aef263e702f8b25fca4f7bb7a7092206684b86b7609f3484710cf90" Workload="ci--4081.3.6--a--ca65cd0ccc-k8s-goldmane--54d579b49d--slnk5-eth0" Sep 12 17:46:12.394272 containerd[1714]: 2025-09-12 17:46:12.384 [INFO][4831] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a61e7c042aef263e702f8b25fca4f7bb7a7092206684b86b7609f3484710cf90" HandleID="k8s-pod-network.a61e7c042aef263e702f8b25fca4f7bb7a7092206684b86b7609f3484710cf90" Workload="ci--4081.3.6--a--ca65cd0ccc-k8s-goldmane--54d579b49d--slnk5-eth0" Sep 12 17:46:12.394272 containerd[1714]: 2025-09-12 17:46:12.386 [INFO][4831] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:46:12.394272 containerd[1714]: 2025-09-12 17:46:12.392 [INFO][4814] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a61e7c042aef263e702f8b25fca4f7bb7a7092206684b86b7609f3484710cf90" Sep 12 17:46:12.396201 containerd[1714]: time="2025-09-12T17:46:12.396152608Z" level=info msg="TearDown network for sandbox \"a61e7c042aef263e702f8b25fca4f7bb7a7092206684b86b7609f3484710cf90\" successfully" Sep 12 17:46:12.397321 containerd[1714]: time="2025-09-12T17:46:12.396572008Z" level=info msg="StopPodSandbox for \"a61e7c042aef263e702f8b25fca4f7bb7a7092206684b86b7609f3484710cf90\" returns successfully" Sep 12 17:46:12.399407 systemd[1]: run-netns-cni\x2d22a9d21c\x2d86f7\x2d723f\x2d9706\x2d72329f94b173.mount: Deactivated successfully. 
Sep 12 17:46:12.407584 containerd[1714]: time="2025-09-12T17:46:12.407346457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-slnk5,Uid:e27e3cc8-0ec3-48e6-8bfe-e729aef4366f,Namespace:calico-system,Attempt:1,}" Sep 12 17:46:12.408973 containerd[1714]: 2025-09-12 17:46:12.348 [INFO][4822] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="157d360263fb2c7af3aa7f9f8815c289dd33493d0095a436f60ec6ac662b0fa3" Sep 12 17:46:12.408973 containerd[1714]: 2025-09-12 17:46:12.348 [INFO][4822] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="157d360263fb2c7af3aa7f9f8815c289dd33493d0095a436f60ec6ac662b0fa3" iface="eth0" netns="/var/run/netns/cni-211c401d-b70f-1bc7-ba32-f140b924430f" Sep 12 17:46:12.408973 containerd[1714]: 2025-09-12 17:46:12.348 [INFO][4822] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="157d360263fb2c7af3aa7f9f8815c289dd33493d0095a436f60ec6ac662b0fa3" iface="eth0" netns="/var/run/netns/cni-211c401d-b70f-1bc7-ba32-f140b924430f" Sep 12 17:46:12.408973 containerd[1714]: 2025-09-12 17:46:12.349 [INFO][4822] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="157d360263fb2c7af3aa7f9f8815c289dd33493d0095a436f60ec6ac662b0fa3" iface="eth0" netns="/var/run/netns/cni-211c401d-b70f-1bc7-ba32-f140b924430f" Sep 12 17:46:12.408973 containerd[1714]: 2025-09-12 17:46:12.349 [INFO][4822] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="157d360263fb2c7af3aa7f9f8815c289dd33493d0095a436f60ec6ac662b0fa3" Sep 12 17:46:12.408973 containerd[1714]: 2025-09-12 17:46:12.350 [INFO][4822] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="157d360263fb2c7af3aa7f9f8815c289dd33493d0095a436f60ec6ac662b0fa3" Sep 12 17:46:12.408973 containerd[1714]: 2025-09-12 17:46:12.377 [INFO][4836] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="157d360263fb2c7af3aa7f9f8815c289dd33493d0095a436f60ec6ac662b0fa3" HandleID="k8s-pod-network.157d360263fb2c7af3aa7f9f8815c289dd33493d0095a436f60ec6ac662b0fa3" Workload="ci--4081.3.6--a--ca65cd0ccc-k8s-coredns--674b8bbfcf--kbs2p-eth0" Sep 12 17:46:12.408973 containerd[1714]: 2025-09-12 17:46:12.378 [INFO][4836] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:46:12.408973 containerd[1714]: 2025-09-12 17:46:12.386 [INFO][4836] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:46:12.408973 containerd[1714]: 2025-09-12 17:46:12.400 [WARNING][4836] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="157d360263fb2c7af3aa7f9f8815c289dd33493d0095a436f60ec6ac662b0fa3" HandleID="k8s-pod-network.157d360263fb2c7af3aa7f9f8815c289dd33493d0095a436f60ec6ac662b0fa3" Workload="ci--4081.3.6--a--ca65cd0ccc-k8s-coredns--674b8bbfcf--kbs2p-eth0" Sep 12 17:46:12.408973 containerd[1714]: 2025-09-12 17:46:12.400 [INFO][4836] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="157d360263fb2c7af3aa7f9f8815c289dd33493d0095a436f60ec6ac662b0fa3" HandleID="k8s-pod-network.157d360263fb2c7af3aa7f9f8815c289dd33493d0095a436f60ec6ac662b0fa3" Workload="ci--4081.3.6--a--ca65cd0ccc-k8s-coredns--674b8bbfcf--kbs2p-eth0" Sep 12 17:46:12.408973 containerd[1714]: 2025-09-12 17:46:12.403 [INFO][4836] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:46:12.408973 containerd[1714]: 2025-09-12 17:46:12.406 [INFO][4822] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="157d360263fb2c7af3aa7f9f8815c289dd33493d0095a436f60ec6ac662b0fa3" Sep 12 17:46:12.409584 containerd[1714]: time="2025-09-12T17:46:12.409374659Z" level=info msg="TearDown network for sandbox \"157d360263fb2c7af3aa7f9f8815c289dd33493d0095a436f60ec6ac662b0fa3\" successfully" Sep 12 17:46:12.409584 containerd[1714]: time="2025-09-12T17:46:12.409401379Z" level=info msg="StopPodSandbox for \"157d360263fb2c7af3aa7f9f8815c289dd33493d0095a436f60ec6ac662b0fa3\" returns successfully" Sep 12 17:46:12.410910 containerd[1714]: time="2025-09-12T17:46:12.410568100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-kbs2p,Uid:fcbac6f2-0b6a-4a25-b5bf-41bd3c3b4f83,Namespace:kube-system,Attempt:1,}" Sep 12 17:46:12.413874 systemd[1]: run-netns-cni\x2d211c401d\x2db70f\x2d1bc7\x2dba32\x2df140b924430f.mount: Deactivated successfully. 
Sep 12 17:46:12.656934 systemd-networkd[1576]: cali8eb75dc1396: Link UP Sep 12 17:46:12.657102 systemd-networkd[1576]: cali8eb75dc1396: Gained carrier Sep 12 17:46:12.684828 containerd[1714]: 2025-09-12 17:46:12.556 [INFO][4858] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--a--ca65cd0ccc-k8s-goldmane--54d579b49d--slnk5-eth0 goldmane-54d579b49d- calico-system e27e3cc8-0ec3-48e6-8bfe-e729aef4366f 967 0 2025-09-12 17:45:45 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.6-a-ca65cd0ccc goldmane-54d579b49d-slnk5 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali8eb75dc1396 [] [] }} ContainerID="58d382360a2f9dea4c3e2cb5687701cf51ec343d8ff800f1d42bb97cca156fb8" Namespace="calico-system" Pod="goldmane-54d579b49d-slnk5" WorkloadEndpoint="ci--4081.3.6--a--ca65cd0ccc-k8s-goldmane--54d579b49d--slnk5-" Sep 12 17:46:12.684828 containerd[1714]: 2025-09-12 17:46:12.556 [INFO][4858] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="58d382360a2f9dea4c3e2cb5687701cf51ec343d8ff800f1d42bb97cca156fb8" Namespace="calico-system" Pod="goldmane-54d579b49d-slnk5" WorkloadEndpoint="ci--4081.3.6--a--ca65cd0ccc-k8s-goldmane--54d579b49d--slnk5-eth0" Sep 12 17:46:12.684828 containerd[1714]: 2025-09-12 17:46:12.594 [INFO][4875] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="58d382360a2f9dea4c3e2cb5687701cf51ec343d8ff800f1d42bb97cca156fb8" HandleID="k8s-pod-network.58d382360a2f9dea4c3e2cb5687701cf51ec343d8ff800f1d42bb97cca156fb8" Workload="ci--4081.3.6--a--ca65cd0ccc-k8s-goldmane--54d579b49d--slnk5-eth0" Sep 12 17:46:12.684828 containerd[1714]: 2025-09-12 17:46:12.594 [INFO][4875] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="58d382360a2f9dea4c3e2cb5687701cf51ec343d8ff800f1d42bb97cca156fb8" HandleID="k8s-pod-network.58d382360a2f9dea4c3e2cb5687701cf51ec343d8ff800f1d42bb97cca156fb8" Workload="ci--4081.3.6--a--ca65cd0ccc-k8s-goldmane--54d579b49d--slnk5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d2ff0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-a-ca65cd0ccc", "pod":"goldmane-54d579b49d-slnk5", "timestamp":"2025-09-12 17:46:12.593754615 +0000 UTC"}, Hostname:"ci-4081.3.6-a-ca65cd0ccc", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:46:12.684828 containerd[1714]: 2025-09-12 17:46:12.594 [INFO][4875] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:46:12.684828 containerd[1714]: 2025-09-12 17:46:12.594 [INFO][4875] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:46:12.684828 containerd[1714]: 2025-09-12 17:46:12.594 [INFO][4875] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-a-ca65cd0ccc' Sep 12 17:46:12.684828 containerd[1714]: 2025-09-12 17:46:12.607 [INFO][4875] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.58d382360a2f9dea4c3e2cb5687701cf51ec343d8ff800f1d42bb97cca156fb8" host="ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:46:12.684828 containerd[1714]: 2025-09-12 17:46:12.614 [INFO][4875] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:46:12.684828 containerd[1714]: 2025-09-12 17:46:12.621 [INFO][4875] ipam/ipam.go 511: Trying affinity for 192.168.21.128/26 host="ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:46:12.684828 containerd[1714]: 2025-09-12 17:46:12.624 [INFO][4875] ipam/ipam.go 158: Attempting to load block cidr=192.168.21.128/26 host="ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:46:12.684828 containerd[1714]: 2025-09-12 17:46:12.627 [INFO][4875] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.21.128/26 host="ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:46:12.684828 containerd[1714]: 2025-09-12 17:46:12.627 [INFO][4875] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.21.128/26 handle="k8s-pod-network.58d382360a2f9dea4c3e2cb5687701cf51ec343d8ff800f1d42bb97cca156fb8" host="ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:46:12.684828 containerd[1714]: 2025-09-12 17:46:12.629 [INFO][4875] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.58d382360a2f9dea4c3e2cb5687701cf51ec343d8ff800f1d42bb97cca156fb8 Sep 12 17:46:12.684828 containerd[1714]: 2025-09-12 17:46:12.635 [INFO][4875] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.21.128/26 handle="k8s-pod-network.58d382360a2f9dea4c3e2cb5687701cf51ec343d8ff800f1d42bb97cca156fb8" host="ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:46:12.684828 containerd[1714]: 2025-09-12 17:46:12.647 [INFO][4875] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.21.130/26] block=192.168.21.128/26 handle="k8s-pod-network.58d382360a2f9dea4c3e2cb5687701cf51ec343d8ff800f1d42bb97cca156fb8" host="ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:46:12.684828 containerd[1714]: 2025-09-12 17:46:12.647 [INFO][4875] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.21.130/26] handle="k8s-pod-network.58d382360a2f9dea4c3e2cb5687701cf51ec343d8ff800f1d42bb97cca156fb8" host="ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:46:12.684828 containerd[1714]: 2025-09-12 17:46:12.647 [INFO][4875] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:46:12.684828 containerd[1714]: 2025-09-12 17:46:12.648 [INFO][4875] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.21.130/26] IPv6=[] ContainerID="58d382360a2f9dea4c3e2cb5687701cf51ec343d8ff800f1d42bb97cca156fb8" HandleID="k8s-pod-network.58d382360a2f9dea4c3e2cb5687701cf51ec343d8ff800f1d42bb97cca156fb8" Workload="ci--4081.3.6--a--ca65cd0ccc-k8s-goldmane--54d579b49d--slnk5-eth0" Sep 12 17:46:12.685633 containerd[1714]: 2025-09-12 17:46:12.651 [INFO][4858] cni-plugin/k8s.go 418: Populated endpoint ContainerID="58d382360a2f9dea4c3e2cb5687701cf51ec343d8ff800f1d42bb97cca156fb8" Namespace="calico-system" Pod="goldmane-54d579b49d-slnk5" WorkloadEndpoint="ci--4081.3.6--a--ca65cd0ccc-k8s-goldmane--54d579b49d--slnk5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--ca65cd0ccc-k8s-goldmane--54d579b49d--slnk5-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"e27e3cc8-0ec3-48e6-8bfe-e729aef4366f", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 45, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-ca65cd0ccc", ContainerID:"", Pod:"goldmane-54d579b49d-slnk5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.21.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8eb75dc1396", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:46:12.685633 containerd[1714]: 2025-09-12 17:46:12.651 [INFO][4858] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.21.130/32] ContainerID="58d382360a2f9dea4c3e2cb5687701cf51ec343d8ff800f1d42bb97cca156fb8" Namespace="calico-system" Pod="goldmane-54d579b49d-slnk5" WorkloadEndpoint="ci--4081.3.6--a--ca65cd0ccc-k8s-goldmane--54d579b49d--slnk5-eth0" Sep 12 17:46:12.685633 containerd[1714]: 2025-09-12 17:46:12.651 [INFO][4858] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8eb75dc1396 ContainerID="58d382360a2f9dea4c3e2cb5687701cf51ec343d8ff800f1d42bb97cca156fb8" Namespace="calico-system" Pod="goldmane-54d579b49d-slnk5" WorkloadEndpoint="ci--4081.3.6--a--ca65cd0ccc-k8s-goldmane--54d579b49d--slnk5-eth0" Sep 12 17:46:12.685633 containerd[1714]: 2025-09-12 17:46:12.659 [INFO][4858] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="58d382360a2f9dea4c3e2cb5687701cf51ec343d8ff800f1d42bb97cca156fb8" Namespace="calico-system" Pod="goldmane-54d579b49d-slnk5" WorkloadEndpoint="ci--4081.3.6--a--ca65cd0ccc-k8s-goldmane--54d579b49d--slnk5-eth0" Sep 12 17:46:12.685633 containerd[1714]: 2025-09-12 17:46:12.660 [INFO][4858] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="58d382360a2f9dea4c3e2cb5687701cf51ec343d8ff800f1d42bb97cca156fb8" Namespace="calico-system" Pod="goldmane-54d579b49d-slnk5" WorkloadEndpoint="ci--4081.3.6--a--ca65cd0ccc-k8s-goldmane--54d579b49d--slnk5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--ca65cd0ccc-k8s-goldmane--54d579b49d--slnk5-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"e27e3cc8-0ec3-48e6-8bfe-e729aef4366f", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 45, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-ca65cd0ccc", ContainerID:"58d382360a2f9dea4c3e2cb5687701cf51ec343d8ff800f1d42bb97cca156fb8", Pod:"goldmane-54d579b49d-slnk5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.21.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8eb75dc1396", MAC:"8e:cd:36:82:69:5d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:46:12.685633 containerd[1714]: 2025-09-12 17:46:12.682 [INFO][4858] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="58d382360a2f9dea4c3e2cb5687701cf51ec343d8ff800f1d42bb97cca156fb8" Namespace="calico-system" Pod="goldmane-54d579b49d-slnk5" WorkloadEndpoint="ci--4081.3.6--a--ca65cd0ccc-k8s-goldmane--54d579b49d--slnk5-eth0" Sep 12 17:46:12.736073 containerd[1714]: time="2025-09-12T17:46:12.735960575Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:46:12.736384 containerd[1714]: time="2025-09-12T17:46:12.736250735Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:46:12.736459 containerd[1714]: time="2025-09-12T17:46:12.736293295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:46:12.736684 containerd[1714]: time="2025-09-12T17:46:12.736653855Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:46:12.758612 systemd[1]: Started cri-containerd-58d382360a2f9dea4c3e2cb5687701cf51ec343d8ff800f1d42bb97cca156fb8.scope - libcontainer container 58d382360a2f9dea4c3e2cb5687701cf51ec343d8ff800f1d42bb97cca156fb8. 
Sep 12 17:46:12.785105 systemd-networkd[1576]: calicaa8ae87a22: Link UP Sep 12 17:46:12.785516 systemd-networkd[1576]: calicaa8ae87a22: Gained carrier Sep 12 17:46:12.827535 containerd[1714]: 2025-09-12 17:46:12.547 [INFO][4845] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--a--ca65cd0ccc-k8s-coredns--674b8bbfcf--kbs2p-eth0 coredns-674b8bbfcf- kube-system fcbac6f2-0b6a-4a25-b5bf-41bd3c3b4f83 968 0 2025-09-12 17:45:28 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-a-ca65cd0ccc coredns-674b8bbfcf-kbs2p eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calicaa8ae87a22 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="8b0ff27af033a301b84729dcd8d8def567049183c52fb5d65db01073bcb60a9d" Namespace="kube-system" Pod="coredns-674b8bbfcf-kbs2p" WorkloadEndpoint="ci--4081.3.6--a--ca65cd0ccc-k8s-coredns--674b8bbfcf--kbs2p-" Sep 12 17:46:12.827535 containerd[1714]: 2025-09-12 17:46:12.547 [INFO][4845] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8b0ff27af033a301b84729dcd8d8def567049183c52fb5d65db01073bcb60a9d" Namespace="kube-system" Pod="coredns-674b8bbfcf-kbs2p" WorkloadEndpoint="ci--4081.3.6--a--ca65cd0ccc-k8s-coredns--674b8bbfcf--kbs2p-eth0" Sep 12 17:46:12.827535 containerd[1714]: 2025-09-12 17:46:12.597 [INFO][4870] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8b0ff27af033a301b84729dcd8d8def567049183c52fb5d65db01073bcb60a9d" HandleID="k8s-pod-network.8b0ff27af033a301b84729dcd8d8def567049183c52fb5d65db01073bcb60a9d" Workload="ci--4081.3.6--a--ca65cd0ccc-k8s-coredns--674b8bbfcf--kbs2p-eth0" Sep 12 17:46:12.827535 containerd[1714]: 2025-09-12 17:46:12.599 [INFO][4870] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="8b0ff27af033a301b84729dcd8d8def567049183c52fb5d65db01073bcb60a9d" HandleID="k8s-pod-network.8b0ff27af033a301b84729dcd8d8def567049183c52fb5d65db01073bcb60a9d" Workload="ci--4081.3.6--a--ca65cd0ccc-k8s-coredns--674b8bbfcf--kbs2p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3130), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-a-ca65cd0ccc", "pod":"coredns-674b8bbfcf-kbs2p", "timestamp":"2025-09-12 17:46:12.597197778 +0000 UTC"}, Hostname:"ci-4081.3.6-a-ca65cd0ccc", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:46:12.827535 containerd[1714]: 2025-09-12 17:46:12.599 [INFO][4870] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:46:12.827535 containerd[1714]: 2025-09-12 17:46:12.648 [INFO][4870] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:46:12.827535 containerd[1714]: 2025-09-12 17:46:12.648 [INFO][4870] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-a-ca65cd0ccc' Sep 12 17:46:12.827535 containerd[1714]: 2025-09-12 17:46:12.707 [INFO][4870] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8b0ff27af033a301b84729dcd8d8def567049183c52fb5d65db01073bcb60a9d" host="ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:46:12.827535 containerd[1714]: 2025-09-12 17:46:12.713 [INFO][4870] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:46:12.827535 containerd[1714]: 2025-09-12 17:46:12.722 [INFO][4870] ipam/ipam.go 511: Trying affinity for 192.168.21.128/26 host="ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:46:12.827535 containerd[1714]: 2025-09-12 17:46:12.724 [INFO][4870] ipam/ipam.go 158: Attempting to load block cidr=192.168.21.128/26 host="ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:46:12.827535 containerd[1714]: 2025-09-12 17:46:12.729 [INFO][4870] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.21.128/26 host="ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:46:12.827535 containerd[1714]: 2025-09-12 17:46:12.729 [INFO][4870] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.21.128/26 handle="k8s-pod-network.8b0ff27af033a301b84729dcd8d8def567049183c52fb5d65db01073bcb60a9d" host="ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:46:12.827535 containerd[1714]: 2025-09-12 17:46:12.734 [INFO][4870] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8b0ff27af033a301b84729dcd8d8def567049183c52fb5d65db01073bcb60a9d Sep 12 17:46:12.827535 containerd[1714]: 2025-09-12 17:46:12.745 [INFO][4870] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.21.128/26 handle="k8s-pod-network.8b0ff27af033a301b84729dcd8d8def567049183c52fb5d65db01073bcb60a9d" host="ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:46:12.827535 containerd[1714]: 2025-09-12 17:46:12.759 [INFO][4870] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.21.131/26] block=192.168.21.128/26 handle="k8s-pod-network.8b0ff27af033a301b84729dcd8d8def567049183c52fb5d65db01073bcb60a9d" host="ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:46:12.827535 containerd[1714]: 2025-09-12 17:46:12.759 [INFO][4870] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.21.131/26] handle="k8s-pod-network.8b0ff27af033a301b84729dcd8d8def567049183c52fb5d65db01073bcb60a9d" host="ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:46:12.827535 containerd[1714]: 2025-09-12 17:46:12.759 [INFO][4870] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:46:12.827535 containerd[1714]: 2025-09-12 17:46:12.759 [INFO][4870] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.21.131/26] IPv6=[] ContainerID="8b0ff27af033a301b84729dcd8d8def567049183c52fb5d65db01073bcb60a9d" HandleID="k8s-pod-network.8b0ff27af033a301b84729dcd8d8def567049183c52fb5d65db01073bcb60a9d" Workload="ci--4081.3.6--a--ca65cd0ccc-k8s-coredns--674b8bbfcf--kbs2p-eth0" Sep 12 17:46:12.828680 containerd[1714]: 2025-09-12 17:46:12.764 [INFO][4845] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8b0ff27af033a301b84729dcd8d8def567049183c52fb5d65db01073bcb60a9d" Namespace="kube-system" Pod="coredns-674b8bbfcf-kbs2p" WorkloadEndpoint="ci--4081.3.6--a--ca65cd0ccc-k8s-coredns--674b8bbfcf--kbs2p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--ca65cd0ccc-k8s-coredns--674b8bbfcf--kbs2p-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"fcbac6f2-0b6a-4a25-b5bf-41bd3c3b4f83", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 45, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-ca65cd0ccc", ContainerID:"", Pod:"coredns-674b8bbfcf-kbs2p", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.21.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicaa8ae87a22", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:46:12.828680 containerd[1714]: 2025-09-12 17:46:12.765 [INFO][4845] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.21.131/32] ContainerID="8b0ff27af033a301b84729dcd8d8def567049183c52fb5d65db01073bcb60a9d" Namespace="kube-system" Pod="coredns-674b8bbfcf-kbs2p" WorkloadEndpoint="ci--4081.3.6--a--ca65cd0ccc-k8s-coredns--674b8bbfcf--kbs2p-eth0" Sep 12 17:46:12.828680 containerd[1714]: 2025-09-12 17:46:12.765 [INFO][4845] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicaa8ae87a22 ContainerID="8b0ff27af033a301b84729dcd8d8def567049183c52fb5d65db01073bcb60a9d" Namespace="kube-system" Pod="coredns-674b8bbfcf-kbs2p" WorkloadEndpoint="ci--4081.3.6--a--ca65cd0ccc-k8s-coredns--674b8bbfcf--kbs2p-eth0" Sep 12 17:46:12.828680 containerd[1714]: 2025-09-12 17:46:12.778 [INFO][4845] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8b0ff27af033a301b84729dcd8d8def567049183c52fb5d65db01073bcb60a9d" Namespace="kube-system" Pod="coredns-674b8bbfcf-kbs2p" WorkloadEndpoint="ci--4081.3.6--a--ca65cd0ccc-k8s-coredns--674b8bbfcf--kbs2p-eth0" Sep 12 17:46:12.828680 containerd[1714]: 2025-09-12 17:46:12.782 [INFO][4845] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8b0ff27af033a301b84729dcd8d8def567049183c52fb5d65db01073bcb60a9d" Namespace="kube-system" Pod="coredns-674b8bbfcf-kbs2p" WorkloadEndpoint="ci--4081.3.6--a--ca65cd0ccc-k8s-coredns--674b8bbfcf--kbs2p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--ca65cd0ccc-k8s-coredns--674b8bbfcf--kbs2p-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"fcbac6f2-0b6a-4a25-b5bf-41bd3c3b4f83", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 45, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-ca65cd0ccc", ContainerID:"8b0ff27af033a301b84729dcd8d8def567049183c52fb5d65db01073bcb60a9d", Pod:"coredns-674b8bbfcf-kbs2p", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.21.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicaa8ae87a22", 
MAC:"8e:00:12:e6:3d:3d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:46:12.828680 containerd[1714]: 2025-09-12 17:46:12.819 [INFO][4845] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8b0ff27af033a301b84729dcd8d8def567049183c52fb5d65db01073bcb60a9d" Namespace="kube-system" Pod="coredns-674b8bbfcf-kbs2p" WorkloadEndpoint="ci--4081.3.6--a--ca65cd0ccc-k8s-coredns--674b8bbfcf--kbs2p-eth0" Sep 12 17:46:12.877633 containerd[1714]: time="2025-09-12T17:46:12.877560694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-slnk5,Uid:e27e3cc8-0ec3-48e6-8bfe-e729aef4366f,Namespace:calico-system,Attempt:1,} returns sandbox id \"58d382360a2f9dea4c3e2cb5687701cf51ec343d8ff800f1d42bb97cca156fb8\"" Sep 12 17:46:12.882344 containerd[1714]: time="2025-09-12T17:46:12.881913258Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:46:12.882344 containerd[1714]: time="2025-09-12T17:46:12.882030018Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:46:12.882344 containerd[1714]: time="2025-09-12T17:46:12.882046538Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:46:12.882344 containerd[1714]: time="2025-09-12T17:46:12.882176138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:46:12.921485 systemd[1]: Started cri-containerd-8b0ff27af033a301b84729dcd8d8def567049183c52fb5d65db01073bcb60a9d.scope - libcontainer container 8b0ff27af033a301b84729dcd8d8def567049183c52fb5d65db01073bcb60a9d. Sep 12 17:46:12.972628 containerd[1714]: time="2025-09-12T17:46:12.972581895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-kbs2p,Uid:fcbac6f2-0b6a-4a25-b5bf-41bd3c3b4f83,Namespace:kube-system,Attempt:1,} returns sandbox id \"8b0ff27af033a301b84729dcd8d8def567049183c52fb5d65db01073bcb60a9d\"" Sep 12 17:46:12.996031 containerd[1714]: time="2025-09-12T17:46:12.995866154Z" level=info msg="CreateContainer within sandbox \"8b0ff27af033a301b84729dcd8d8def567049183c52fb5d65db01073bcb60a9d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:46:13.055335 containerd[1714]: time="2025-09-12T17:46:13.055178685Z" level=info msg="CreateContainer within sandbox \"8b0ff27af033a301b84729dcd8d8def567049183c52fb5d65db01073bcb60a9d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9822e01483f1ef35bee1ffb95c4cb0d988259e6ec8713061261f6532c83341ad\"" Sep 12 17:46:13.056079 containerd[1714]: time="2025-09-12T17:46:13.056001485Z" level=info msg="StartContainer for \"9822e01483f1ef35bee1ffb95c4cb0d988259e6ec8713061261f6532c83341ad\"" Sep 12 17:46:13.083408 systemd[1]: Started cri-containerd-9822e01483f1ef35bee1ffb95c4cb0d988259e6ec8713061261f6532c83341ad.scope - libcontainer container 9822e01483f1ef35bee1ffb95c4cb0d988259e6ec8713061261f6532c83341ad. 
Sep 12 17:46:13.111691 containerd[1714]: time="2025-09-12T17:46:13.111555492Z" level=info msg="StartContainer for \"9822e01483f1ef35bee1ffb95c4cb0d988259e6ec8713061261f6532c83341ad\" returns successfully" Sep 12 17:46:13.269423 containerd[1714]: time="2025-09-12T17:46:13.268699105Z" level=info msg="StopPodSandbox for \"4800a32dc0ece1e37f0196ac2e5a09864687a071daee93d8f8db184499212a46\"" Sep 12 17:46:13.269915 containerd[1714]: time="2025-09-12T17:46:13.269793946Z" level=info msg="StopPodSandbox for \"6231a9efd0cb51838e9a568fa8e5f840024bbba691fddd4da61c2d8ed6a2268a\"" Sep 12 17:46:13.270619 containerd[1714]: time="2025-09-12T17:46:13.270523147Z" level=info msg="StopPodSandbox for \"173fd8c2d24df8fdb0ea4d253630de5c941845ea80bc13b84b802583e715ba1e\"" Sep 12 17:46:13.458352 containerd[1714]: 2025-09-12 17:46:13.358 [INFO][5050] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4800a32dc0ece1e37f0196ac2e5a09864687a071daee93d8f8db184499212a46" Sep 12 17:46:13.458352 containerd[1714]: 2025-09-12 17:46:13.359 [INFO][5050] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4800a32dc0ece1e37f0196ac2e5a09864687a071daee93d8f8db184499212a46" iface="eth0" netns="/var/run/netns/cni-ef526712-6b55-a520-3dd4-2a3a0f82daa8" Sep 12 17:46:13.458352 containerd[1714]: 2025-09-12 17:46:13.359 [INFO][5050] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4800a32dc0ece1e37f0196ac2e5a09864687a071daee93d8f8db184499212a46" iface="eth0" netns="/var/run/netns/cni-ef526712-6b55-a520-3dd4-2a3a0f82daa8" Sep 12 17:46:13.458352 containerd[1714]: 2025-09-12 17:46:13.361 [INFO][5050] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="4800a32dc0ece1e37f0196ac2e5a09864687a071daee93d8f8db184499212a46" iface="eth0" netns="/var/run/netns/cni-ef526712-6b55-a520-3dd4-2a3a0f82daa8" Sep 12 17:46:13.458352 containerd[1714]: 2025-09-12 17:46:13.361 [INFO][5050] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4800a32dc0ece1e37f0196ac2e5a09864687a071daee93d8f8db184499212a46" Sep 12 17:46:13.458352 containerd[1714]: 2025-09-12 17:46:13.361 [INFO][5050] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4800a32dc0ece1e37f0196ac2e5a09864687a071daee93d8f8db184499212a46" Sep 12 17:46:13.458352 containerd[1714]: 2025-09-12 17:46:13.430 [INFO][5066] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4800a32dc0ece1e37f0196ac2e5a09864687a071daee93d8f8db184499212a46" HandleID="k8s-pod-network.4800a32dc0ece1e37f0196ac2e5a09864687a071daee93d8f8db184499212a46" Workload="ci--4081.3.6--a--ca65cd0ccc-k8s-calico--apiserver--868c8dbd57--5bhwh-eth0" Sep 12 17:46:13.458352 containerd[1714]: 2025-09-12 17:46:13.432 [INFO][5066] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:46:13.458352 containerd[1714]: 2025-09-12 17:46:13.432 [INFO][5066] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:46:13.458352 containerd[1714]: 2025-09-12 17:46:13.444 [WARNING][5066] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4800a32dc0ece1e37f0196ac2e5a09864687a071daee93d8f8db184499212a46" HandleID="k8s-pod-network.4800a32dc0ece1e37f0196ac2e5a09864687a071daee93d8f8db184499212a46" Workload="ci--4081.3.6--a--ca65cd0ccc-k8s-calico--apiserver--868c8dbd57--5bhwh-eth0" Sep 12 17:46:13.458352 containerd[1714]: 2025-09-12 17:46:13.444 [INFO][5066] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4800a32dc0ece1e37f0196ac2e5a09864687a071daee93d8f8db184499212a46" HandleID="k8s-pod-network.4800a32dc0ece1e37f0196ac2e5a09864687a071daee93d8f8db184499212a46" Workload="ci--4081.3.6--a--ca65cd0ccc-k8s-calico--apiserver--868c8dbd57--5bhwh-eth0" Sep 12 17:46:13.458352 containerd[1714]: 2025-09-12 17:46:13.450 [INFO][5066] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:46:13.458352 containerd[1714]: 2025-09-12 17:46:13.454 [INFO][5050] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4800a32dc0ece1e37f0196ac2e5a09864687a071daee93d8f8db184499212a46" Sep 12 17:46:13.461318 containerd[1714]: time="2025-09-12T17:46:13.459042346Z" level=info msg="TearDown network for sandbox \"4800a32dc0ece1e37f0196ac2e5a09864687a071daee93d8f8db184499212a46\" successfully" Sep 12 17:46:13.461318 containerd[1714]: time="2025-09-12T17:46:13.459074546Z" level=info msg="StopPodSandbox for \"4800a32dc0ece1e37f0196ac2e5a09864687a071daee93d8f8db184499212a46\" returns successfully" Sep 12 17:46:13.463047 systemd[1]: run-netns-cni\x2def526712\x2d6b55\x2da520\x2d3dd4\x2d2a3a0f82daa8.mount: Deactivated successfully. 
Sep 12 17:46:13.467444 containerd[1714]: time="2025-09-12T17:46:13.467357153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-868c8dbd57-5bhwh,Uid:faa47950-5ad9-4518-9026-0ce0997471e9,Namespace:calico-apiserver,Attempt:1,}" Sep 12 17:46:13.506686 containerd[1714]: 2025-09-12 17:46:13.435 [INFO][5058] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="173fd8c2d24df8fdb0ea4d253630de5c941845ea80bc13b84b802583e715ba1e" Sep 12 17:46:13.506686 containerd[1714]: 2025-09-12 17:46:13.435 [INFO][5058] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="173fd8c2d24df8fdb0ea4d253630de5c941845ea80bc13b84b802583e715ba1e" iface="eth0" netns="/var/run/netns/cni-27e747ba-7183-865b-5225-9e9c8f7b3891" Sep 12 17:46:13.506686 containerd[1714]: 2025-09-12 17:46:13.435 [INFO][5058] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="173fd8c2d24df8fdb0ea4d253630de5c941845ea80bc13b84b802583e715ba1e" iface="eth0" netns="/var/run/netns/cni-27e747ba-7183-865b-5225-9e9c8f7b3891" Sep 12 17:46:13.506686 containerd[1714]: 2025-09-12 17:46:13.438 [INFO][5058] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="173fd8c2d24df8fdb0ea4d253630de5c941845ea80bc13b84b802583e715ba1e" iface="eth0" netns="/var/run/netns/cni-27e747ba-7183-865b-5225-9e9c8f7b3891" Sep 12 17:46:13.506686 containerd[1714]: 2025-09-12 17:46:13.438 [INFO][5058] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="173fd8c2d24df8fdb0ea4d253630de5c941845ea80bc13b84b802583e715ba1e" Sep 12 17:46:13.506686 containerd[1714]: 2025-09-12 17:46:13.438 [INFO][5058] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="173fd8c2d24df8fdb0ea4d253630de5c941845ea80bc13b84b802583e715ba1e" Sep 12 17:46:13.506686 containerd[1714]: 2025-09-12 17:46:13.474 [INFO][5082] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="173fd8c2d24df8fdb0ea4d253630de5c941845ea80bc13b84b802583e715ba1e" HandleID="k8s-pod-network.173fd8c2d24df8fdb0ea4d253630de5c941845ea80bc13b84b802583e715ba1e" Workload="ci--4081.3.6--a--ca65cd0ccc-k8s-calico--apiserver--868c8dbd57--s6vg4-eth0" Sep 12 17:46:13.506686 containerd[1714]: 2025-09-12 17:46:13.475 [INFO][5082] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:46:13.506686 containerd[1714]: 2025-09-12 17:46:13.475 [INFO][5082] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:46:13.506686 containerd[1714]: 2025-09-12 17:46:13.490 [WARNING][5082] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="173fd8c2d24df8fdb0ea4d253630de5c941845ea80bc13b84b802583e715ba1e" HandleID="k8s-pod-network.173fd8c2d24df8fdb0ea4d253630de5c941845ea80bc13b84b802583e715ba1e" Workload="ci--4081.3.6--a--ca65cd0ccc-k8s-calico--apiserver--868c8dbd57--s6vg4-eth0" Sep 12 17:46:13.506686 containerd[1714]: 2025-09-12 17:46:13.490 [INFO][5082] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="173fd8c2d24df8fdb0ea4d253630de5c941845ea80bc13b84b802583e715ba1e" HandleID="k8s-pod-network.173fd8c2d24df8fdb0ea4d253630de5c941845ea80bc13b84b802583e715ba1e" Workload="ci--4081.3.6--a--ca65cd0ccc-k8s-calico--apiserver--868c8dbd57--s6vg4-eth0" Sep 12 17:46:13.506686 containerd[1714]: 2025-09-12 17:46:13.494 [INFO][5082] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:46:13.506686 containerd[1714]: 2025-09-12 17:46:13.500 [INFO][5058] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="173fd8c2d24df8fdb0ea4d253630de5c941845ea80bc13b84b802583e715ba1e" Sep 12 17:46:13.514759 containerd[1714]: time="2025-09-12T17:46:13.513437192Z" level=info msg="TearDown network for sandbox \"173fd8c2d24df8fdb0ea4d253630de5c941845ea80bc13b84b802583e715ba1e\" successfully" Sep 12 17:46:13.514759 containerd[1714]: time="2025-09-12T17:46:13.513483832Z" level=info msg="StopPodSandbox for \"173fd8c2d24df8fdb0ea4d253630de5c941845ea80bc13b84b802583e715ba1e\" returns successfully" Sep 12 17:46:13.514211 systemd[1]: run-netns-cni\x2d27e747ba\x2d7183\x2d865b\x2d5225\x2d9e9c8f7b3891.mount: Deactivated successfully. 
Sep 12 17:46:13.520328 containerd[1714]: time="2025-09-12T17:46:13.518039836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-868c8dbd57-s6vg4,Uid:bbd20681-1f6f-410b-806a-7fe01c4c3226,Namespace:calico-apiserver,Attempt:1,}" Sep 12 17:46:13.543829 containerd[1714]: 2025-09-12 17:46:13.425 [INFO][5054] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6231a9efd0cb51838e9a568fa8e5f840024bbba691fddd4da61c2d8ed6a2268a" Sep 12 17:46:13.543829 containerd[1714]: 2025-09-12 17:46:13.425 [INFO][5054] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6231a9efd0cb51838e9a568fa8e5f840024bbba691fddd4da61c2d8ed6a2268a" iface="eth0" netns="/var/run/netns/cni-0b61ffb2-ebac-dec7-e077-f8b92a17b6f8" Sep 12 17:46:13.543829 containerd[1714]: 2025-09-12 17:46:13.425 [INFO][5054] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6231a9efd0cb51838e9a568fa8e5f840024bbba691fddd4da61c2d8ed6a2268a" iface="eth0" netns="/var/run/netns/cni-0b61ffb2-ebac-dec7-e077-f8b92a17b6f8" Sep 12 17:46:13.543829 containerd[1714]: 2025-09-12 17:46:13.426 [INFO][5054] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="6231a9efd0cb51838e9a568fa8e5f840024bbba691fddd4da61c2d8ed6a2268a" iface="eth0" netns="/var/run/netns/cni-0b61ffb2-ebac-dec7-e077-f8b92a17b6f8" Sep 12 17:46:13.543829 containerd[1714]: 2025-09-12 17:46:13.426 [INFO][5054] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6231a9efd0cb51838e9a568fa8e5f840024bbba691fddd4da61c2d8ed6a2268a" Sep 12 17:46:13.543829 containerd[1714]: 2025-09-12 17:46:13.426 [INFO][5054] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6231a9efd0cb51838e9a568fa8e5f840024bbba691fddd4da61c2d8ed6a2268a" Sep 12 17:46:13.543829 containerd[1714]: 2025-09-12 17:46:13.483 [INFO][5076] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6231a9efd0cb51838e9a568fa8e5f840024bbba691fddd4da61c2d8ed6a2268a" HandleID="k8s-pod-network.6231a9efd0cb51838e9a568fa8e5f840024bbba691fddd4da61c2d8ed6a2268a" Workload="ci--4081.3.6--a--ca65cd0ccc-k8s-csi--node--driver--jjztm-eth0" Sep 12 17:46:13.543829 containerd[1714]: 2025-09-12 17:46:13.485 [INFO][5076] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:46:13.543829 containerd[1714]: 2025-09-12 17:46:13.496 [INFO][5076] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:46:13.543829 containerd[1714]: 2025-09-12 17:46:13.521 [WARNING][5076] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6231a9efd0cb51838e9a568fa8e5f840024bbba691fddd4da61c2d8ed6a2268a" HandleID="k8s-pod-network.6231a9efd0cb51838e9a568fa8e5f840024bbba691fddd4da61c2d8ed6a2268a" Workload="ci--4081.3.6--a--ca65cd0ccc-k8s-csi--node--driver--jjztm-eth0" Sep 12 17:46:13.543829 containerd[1714]: 2025-09-12 17:46:13.521 [INFO][5076] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6231a9efd0cb51838e9a568fa8e5f840024bbba691fddd4da61c2d8ed6a2268a" HandleID="k8s-pod-network.6231a9efd0cb51838e9a568fa8e5f840024bbba691fddd4da61c2d8ed6a2268a" Workload="ci--4081.3.6--a--ca65cd0ccc-k8s-csi--node--driver--jjztm-eth0" Sep 12 17:46:13.543829 containerd[1714]: 2025-09-12 17:46:13.531 [INFO][5076] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:46:13.543829 containerd[1714]: 2025-09-12 17:46:13.539 [INFO][5054] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6231a9efd0cb51838e9a568fa8e5f840024bbba691fddd4da61c2d8ed6a2268a" Sep 12 17:46:13.544750 containerd[1714]: time="2025-09-12T17:46:13.544229338Z" level=info msg="TearDown network for sandbox \"6231a9efd0cb51838e9a568fa8e5f840024bbba691fddd4da61c2d8ed6a2268a\" successfully" Sep 12 17:46:13.544750 containerd[1714]: time="2025-09-12T17:46:13.544320898Z" level=info msg="StopPodSandbox for \"6231a9efd0cb51838e9a568fa8e5f840024bbba691fddd4da61c2d8ed6a2268a\" returns successfully" Sep 12 17:46:13.549127 containerd[1714]: time="2025-09-12T17:46:13.548624342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jjztm,Uid:64c67752-89ee-4f26-b63e-b37e41be4790,Namespace:calico-system,Attempt:1,}" Sep 12 17:46:13.560575 kubelet[3184]: I0912 17:46:13.559899 3184 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-kbs2p" podStartSLOduration=45.559880551 podStartE2EDuration="45.559880551s" podCreationTimestamp="2025-09-12 17:45:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:46:13.557722829 +0000 UTC m=+51.413351296" watchObservedRunningTime="2025-09-12 17:46:13.559880551 +0000 UTC m=+51.415509018" Sep 12 17:46:13.705972 systemd-networkd[1576]: cali3a2f95f53a1: Link UP Sep 12 17:46:13.707096 systemd-networkd[1576]: cali3a2f95f53a1: Gained carrier Sep 12 17:46:13.732697 containerd[1714]: 2025-09-12 17:46:13.626 [INFO][5091] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--a--ca65cd0ccc-k8s-calico--apiserver--868c8dbd57--5bhwh-eth0 calico-apiserver-868c8dbd57- calico-apiserver faa47950-5ad9-4518-9026-0ce0997471e9 985 0 2025-09-12 17:45:38 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:868c8dbd57 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-a-ca65cd0ccc calico-apiserver-868c8dbd57-5bhwh eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3a2f95f53a1 [] [] }} ContainerID="5c099de3b4c477613d344486fcc56b518263e741c6a5e88e11b123a2ee86ea9b" Namespace="calico-apiserver" Pod="calico-apiserver-868c8dbd57-5bhwh" WorkloadEndpoint="ci--4081.3.6--a--ca65cd0ccc-k8s-calico--apiserver--868c8dbd57--5bhwh-" Sep 12 17:46:13.732697 containerd[1714]: 2025-09-12 17:46:13.626 [INFO][5091] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5c099de3b4c477613d344486fcc56b518263e741c6a5e88e11b123a2ee86ea9b" Namespace="calico-apiserver" Pod="calico-apiserver-868c8dbd57-5bhwh" WorkloadEndpoint="ci--4081.3.6--a--ca65cd0ccc-k8s-calico--apiserver--868c8dbd57--5bhwh-eth0" Sep 12 17:46:13.732697 containerd[1714]: 2025-09-12 17:46:13.654 [INFO][5105] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="5c099de3b4c477613d344486fcc56b518263e741c6a5e88e11b123a2ee86ea9b" HandleID="k8s-pod-network.5c099de3b4c477613d344486fcc56b518263e741c6a5e88e11b123a2ee86ea9b" Workload="ci--4081.3.6--a--ca65cd0ccc-k8s-calico--apiserver--868c8dbd57--5bhwh-eth0" Sep 12 17:46:13.732697 containerd[1714]: 2025-09-12 17:46:13.655 [INFO][5105] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5c099de3b4c477613d344486fcc56b518263e741c6a5e88e11b123a2ee86ea9b" HandleID="k8s-pod-network.5c099de3b4c477613d344486fcc56b518263e741c6a5e88e11b123a2ee86ea9b" Workload="ci--4081.3.6--a--ca65cd0ccc-k8s-calico--apiserver--868c8dbd57--5bhwh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024aff0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-a-ca65cd0ccc", "pod":"calico-apiserver-868c8dbd57-5bhwh", "timestamp":"2025-09-12 17:46:13.654724231 +0000 UTC"}, Hostname:"ci-4081.3.6-a-ca65cd0ccc", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:46:13.732697 containerd[1714]: 2025-09-12 17:46:13.655 [INFO][5105] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:46:13.732697 containerd[1714]: 2025-09-12 17:46:13.655 [INFO][5105] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:46:13.732697 containerd[1714]: 2025-09-12 17:46:13.655 [INFO][5105] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-a-ca65cd0ccc' Sep 12 17:46:13.732697 containerd[1714]: 2025-09-12 17:46:13.666 [INFO][5105] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5c099de3b4c477613d344486fcc56b518263e741c6a5e88e11b123a2ee86ea9b" host="ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:46:13.732697 containerd[1714]: 2025-09-12 17:46:13.671 [INFO][5105] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:46:13.732697 containerd[1714]: 2025-09-12 17:46:13.675 [INFO][5105] ipam/ipam.go 511: Trying affinity for 192.168.21.128/26 host="ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:46:13.732697 containerd[1714]: 2025-09-12 17:46:13.677 [INFO][5105] ipam/ipam.go 158: Attempting to load block cidr=192.168.21.128/26 host="ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:46:13.732697 containerd[1714]: 2025-09-12 17:46:13.679 [INFO][5105] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.21.128/26 host="ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:46:13.732697 containerd[1714]: 2025-09-12 17:46:13.679 [INFO][5105] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.21.128/26 handle="k8s-pod-network.5c099de3b4c477613d344486fcc56b518263e741c6a5e88e11b123a2ee86ea9b" host="ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:46:13.732697 containerd[1714]: 2025-09-12 17:46:13.682 [INFO][5105] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5c099de3b4c477613d344486fcc56b518263e741c6a5e88e11b123a2ee86ea9b Sep 12 17:46:13.732697 containerd[1714]: 2025-09-12 17:46:13.687 [INFO][5105] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.21.128/26 handle="k8s-pod-network.5c099de3b4c477613d344486fcc56b518263e741c6a5e88e11b123a2ee86ea9b" host="ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:46:13.732697 containerd[1714]: 2025-09-12 17:46:13.698 [INFO][5105] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.21.132/26] block=192.168.21.128/26 handle="k8s-pod-network.5c099de3b4c477613d344486fcc56b518263e741c6a5e88e11b123a2ee86ea9b" host="ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:46:13.732697 containerd[1714]: 2025-09-12 17:46:13.698 [INFO][5105] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.21.132/26] handle="k8s-pod-network.5c099de3b4c477613d344486fcc56b518263e741c6a5e88e11b123a2ee86ea9b" host="ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:46:13.732697 containerd[1714]: 2025-09-12 17:46:13.698 [INFO][5105] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:46:13.732697 containerd[1714]: 2025-09-12 17:46:13.698 [INFO][5105] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.21.132/26] IPv6=[] ContainerID="5c099de3b4c477613d344486fcc56b518263e741c6a5e88e11b123a2ee86ea9b" HandleID="k8s-pod-network.5c099de3b4c477613d344486fcc56b518263e741c6a5e88e11b123a2ee86ea9b" Workload="ci--4081.3.6--a--ca65cd0ccc-k8s-calico--apiserver--868c8dbd57--5bhwh-eth0" Sep 12 17:46:13.735219 containerd[1714]: 2025-09-12 17:46:13.700 [INFO][5091] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5c099de3b4c477613d344486fcc56b518263e741c6a5e88e11b123a2ee86ea9b" Namespace="calico-apiserver" Pod="calico-apiserver-868c8dbd57-5bhwh" WorkloadEndpoint="ci--4081.3.6--a--ca65cd0ccc-k8s-calico--apiserver--868c8dbd57--5bhwh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--ca65cd0ccc-k8s-calico--apiserver--868c8dbd57--5bhwh-eth0", GenerateName:"calico-apiserver-868c8dbd57-", Namespace:"calico-apiserver", SelfLink:"", UID:"faa47950-5ad9-4518-9026-0ce0997471e9", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 45, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"868c8dbd57", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-ca65cd0ccc", ContainerID:"", Pod:"calico-apiserver-868c8dbd57-5bhwh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.21.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3a2f95f53a1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:46:13.735219 containerd[1714]: 2025-09-12 17:46:13.700 [INFO][5091] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.21.132/32] ContainerID="5c099de3b4c477613d344486fcc56b518263e741c6a5e88e11b123a2ee86ea9b" Namespace="calico-apiserver" Pod="calico-apiserver-868c8dbd57-5bhwh" WorkloadEndpoint="ci--4081.3.6--a--ca65cd0ccc-k8s-calico--apiserver--868c8dbd57--5bhwh-eth0" Sep 12 17:46:13.735219 containerd[1714]: 2025-09-12 17:46:13.700 [INFO][5091] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3a2f95f53a1 ContainerID="5c099de3b4c477613d344486fcc56b518263e741c6a5e88e11b123a2ee86ea9b" Namespace="calico-apiserver" Pod="calico-apiserver-868c8dbd57-5bhwh" WorkloadEndpoint="ci--4081.3.6--a--ca65cd0ccc-k8s-calico--apiserver--868c8dbd57--5bhwh-eth0" Sep 12 17:46:13.735219 containerd[1714]: 2025-09-12 17:46:13.708 [INFO][5091] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5c099de3b4c477613d344486fcc56b518263e741c6a5e88e11b123a2ee86ea9b" Namespace="calico-apiserver" 
Pod="calico-apiserver-868c8dbd57-5bhwh" WorkloadEndpoint="ci--4081.3.6--a--ca65cd0ccc-k8s-calico--apiserver--868c8dbd57--5bhwh-eth0" Sep 12 17:46:13.735219 containerd[1714]: 2025-09-12 17:46:13.708 [INFO][5091] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5c099de3b4c477613d344486fcc56b518263e741c6a5e88e11b123a2ee86ea9b" Namespace="calico-apiserver" Pod="calico-apiserver-868c8dbd57-5bhwh" WorkloadEndpoint="ci--4081.3.6--a--ca65cd0ccc-k8s-calico--apiserver--868c8dbd57--5bhwh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--ca65cd0ccc-k8s-calico--apiserver--868c8dbd57--5bhwh-eth0", GenerateName:"calico-apiserver-868c8dbd57-", Namespace:"calico-apiserver", SelfLink:"", UID:"faa47950-5ad9-4518-9026-0ce0997471e9", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 45, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"868c8dbd57", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-ca65cd0ccc", ContainerID:"5c099de3b4c477613d344486fcc56b518263e741c6a5e88e11b123a2ee86ea9b", Pod:"calico-apiserver-868c8dbd57-5bhwh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.21.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, 
InterfaceName:"cali3a2f95f53a1", MAC:"e2:fa:7f:a9:31:a5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:46:13.735219 containerd[1714]: 2025-09-12 17:46:13.729 [INFO][5091] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5c099de3b4c477613d344486fcc56b518263e741c6a5e88e11b123a2ee86ea9b" Namespace="calico-apiserver" Pod="calico-apiserver-868c8dbd57-5bhwh" WorkloadEndpoint="ci--4081.3.6--a--ca65cd0ccc-k8s-calico--apiserver--868c8dbd57--5bhwh-eth0" Sep 12 17:46:13.797814 containerd[1714]: time="2025-09-12T17:46:13.797607672Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:46:13.797814 containerd[1714]: time="2025-09-12T17:46:13.797686232Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:46:13.797814 containerd[1714]: time="2025-09-12T17:46:13.797708432Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:46:13.799422 containerd[1714]: time="2025-09-12T17:46:13.797814472Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:46:13.828875 systemd[1]: Started cri-containerd-5c099de3b4c477613d344486fcc56b518263e741c6a5e88e11b123a2ee86ea9b.scope - libcontainer container 5c099de3b4c477613d344486fcc56b518263e741c6a5e88e11b123a2ee86ea9b. 
Sep 12 17:46:13.924814 containerd[1714]: time="2025-09-12T17:46:13.924764099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-868c8dbd57-5bhwh,Uid:faa47950-5ad9-4518-9026-0ce0997471e9,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"5c099de3b4c477613d344486fcc56b518263e741c6a5e88e11b123a2ee86ea9b\"" Sep 12 17:46:13.984372 systemd-networkd[1576]: calicaa8ae87a22: Gained IPv6LL Sep 12 17:46:13.984652 systemd-networkd[1576]: cali8eb75dc1396: Gained IPv6LL Sep 12 17:46:14.099729 systemd-networkd[1576]: cali15b548155ea: Link UP Sep 12 17:46:14.110852 systemd-networkd[1576]: cali15b548155ea: Gained carrier Sep 12 17:46:14.137979 containerd[1714]: 2025-09-12 17:46:13.907 [INFO][5150] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--a--ca65cd0ccc-k8s-csi--node--driver--jjztm-eth0 csi-node-driver- calico-system 64c67752-89ee-4f26-b63e-b37e41be4790 986 0 2025-09-12 17:45:46 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.6-a-ca65cd0ccc csi-node-driver-jjztm eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali15b548155ea [] [] }} ContainerID="ed4c973e8b3a9e3413e0f6e2da06b4c9afc03ca940400b8d31dc588f1910975d" Namespace="calico-system" Pod="csi-node-driver-jjztm" WorkloadEndpoint="ci--4081.3.6--a--ca65cd0ccc-k8s-csi--node--driver--jjztm-" Sep 12 17:46:14.137979 containerd[1714]: 2025-09-12 17:46:13.907 [INFO][5150] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ed4c973e8b3a9e3413e0f6e2da06b4c9afc03ca940400b8d31dc588f1910975d" Namespace="calico-system" Pod="csi-node-driver-jjztm" 
WorkloadEndpoint="ci--4081.3.6--a--ca65cd0ccc-k8s-csi--node--driver--jjztm-eth0" Sep 12 17:46:14.137979 containerd[1714]: 2025-09-12 17:46:13.972 [INFO][5199] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ed4c973e8b3a9e3413e0f6e2da06b4c9afc03ca940400b8d31dc588f1910975d" HandleID="k8s-pod-network.ed4c973e8b3a9e3413e0f6e2da06b4c9afc03ca940400b8d31dc588f1910975d" Workload="ci--4081.3.6--a--ca65cd0ccc-k8s-csi--node--driver--jjztm-eth0" Sep 12 17:46:14.137979 containerd[1714]: 2025-09-12 17:46:13.972 [INFO][5199] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ed4c973e8b3a9e3413e0f6e2da06b4c9afc03ca940400b8d31dc588f1910975d" HandleID="k8s-pod-network.ed4c973e8b3a9e3413e0f6e2da06b4c9afc03ca940400b8d31dc588f1910975d" Workload="ci--4081.3.6--a--ca65cd0ccc-k8s-csi--node--driver--jjztm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000349bd0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-a-ca65cd0ccc", "pod":"csi-node-driver-jjztm", "timestamp":"2025-09-12 17:46:13.97277378 +0000 UTC"}, Hostname:"ci-4081.3.6-a-ca65cd0ccc", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:46:14.137979 containerd[1714]: 2025-09-12 17:46:13.972 [INFO][5199] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:46:14.137979 containerd[1714]: 2025-09-12 17:46:13.973 [INFO][5199] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:46:14.137979 containerd[1714]: 2025-09-12 17:46:13.973 [INFO][5199] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-a-ca65cd0ccc' Sep 12 17:46:14.137979 containerd[1714]: 2025-09-12 17:46:14.005 [INFO][5199] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ed4c973e8b3a9e3413e0f6e2da06b4c9afc03ca940400b8d31dc588f1910975d" host="ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:46:14.137979 containerd[1714]: 2025-09-12 17:46:14.020 [INFO][5199] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:46:14.137979 containerd[1714]: 2025-09-12 17:46:14.040 [INFO][5199] ipam/ipam.go 511: Trying affinity for 192.168.21.128/26 host="ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:46:14.137979 containerd[1714]: 2025-09-12 17:46:14.046 [INFO][5199] ipam/ipam.go 158: Attempting to load block cidr=192.168.21.128/26 host="ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:46:14.137979 containerd[1714]: 2025-09-12 17:46:14.050 [INFO][5199] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.21.128/26 host="ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:46:14.137979 containerd[1714]: 2025-09-12 17:46:14.051 [INFO][5199] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.21.128/26 handle="k8s-pod-network.ed4c973e8b3a9e3413e0f6e2da06b4c9afc03ca940400b8d31dc588f1910975d" host="ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:46:14.137979 containerd[1714]: 2025-09-12 17:46:14.053 [INFO][5199] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ed4c973e8b3a9e3413e0f6e2da06b4c9afc03ca940400b8d31dc588f1910975d Sep 12 17:46:14.137979 containerd[1714]: 2025-09-12 17:46:14.060 [INFO][5199] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.21.128/26 handle="k8s-pod-network.ed4c973e8b3a9e3413e0f6e2da06b4c9afc03ca940400b8d31dc588f1910975d" host="ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:46:14.137979 containerd[1714]: 2025-09-12 17:46:14.074 [INFO][5199] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.21.133/26] block=192.168.21.128/26 handle="k8s-pod-network.ed4c973e8b3a9e3413e0f6e2da06b4c9afc03ca940400b8d31dc588f1910975d" host="ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:46:14.137979 containerd[1714]: 2025-09-12 17:46:14.074 [INFO][5199] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.21.133/26] handle="k8s-pod-network.ed4c973e8b3a9e3413e0f6e2da06b4c9afc03ca940400b8d31dc588f1910975d" host="ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:46:14.137979 containerd[1714]: 2025-09-12 17:46:14.074 [INFO][5199] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:46:14.137979 containerd[1714]: 2025-09-12 17:46:14.074 [INFO][5199] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.21.133/26] IPv6=[] ContainerID="ed4c973e8b3a9e3413e0f6e2da06b4c9afc03ca940400b8d31dc588f1910975d" HandleID="k8s-pod-network.ed4c973e8b3a9e3413e0f6e2da06b4c9afc03ca940400b8d31dc588f1910975d" Workload="ci--4081.3.6--a--ca65cd0ccc-k8s-csi--node--driver--jjztm-eth0" Sep 12 17:46:14.139220 containerd[1714]: 2025-09-12 17:46:14.083 [INFO][5150] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ed4c973e8b3a9e3413e0f6e2da06b4c9afc03ca940400b8d31dc588f1910975d" Namespace="calico-system" Pod="csi-node-driver-jjztm" WorkloadEndpoint="ci--4081.3.6--a--ca65cd0ccc-k8s-csi--node--driver--jjztm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--ca65cd0ccc-k8s-csi--node--driver--jjztm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"64c67752-89ee-4f26-b63e-b37e41be4790", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 45, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-ca65cd0ccc", ContainerID:"", Pod:"csi-node-driver-jjztm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.21.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali15b548155ea", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:46:14.139220 containerd[1714]: 2025-09-12 17:46:14.083 [INFO][5150] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.21.133/32] ContainerID="ed4c973e8b3a9e3413e0f6e2da06b4c9afc03ca940400b8d31dc588f1910975d" Namespace="calico-system" Pod="csi-node-driver-jjztm" WorkloadEndpoint="ci--4081.3.6--a--ca65cd0ccc-k8s-csi--node--driver--jjztm-eth0" Sep 12 17:46:14.139220 containerd[1714]: 2025-09-12 17:46:14.084 [INFO][5150] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali15b548155ea ContainerID="ed4c973e8b3a9e3413e0f6e2da06b4c9afc03ca940400b8d31dc588f1910975d" Namespace="calico-system" Pod="csi-node-driver-jjztm" WorkloadEndpoint="ci--4081.3.6--a--ca65cd0ccc-k8s-csi--node--driver--jjztm-eth0" Sep 12 17:46:14.139220 containerd[1714]: 2025-09-12 17:46:14.112 [INFO][5150] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ed4c973e8b3a9e3413e0f6e2da06b4c9afc03ca940400b8d31dc588f1910975d" Namespace="calico-system" Pod="csi-node-driver-jjztm" WorkloadEndpoint="ci--4081.3.6--a--ca65cd0ccc-k8s-csi--node--driver--jjztm-eth0" Sep 12 17:46:14.139220 
containerd[1714]: 2025-09-12 17:46:14.115 [INFO][5150] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ed4c973e8b3a9e3413e0f6e2da06b4c9afc03ca940400b8d31dc588f1910975d" Namespace="calico-system" Pod="csi-node-driver-jjztm" WorkloadEndpoint="ci--4081.3.6--a--ca65cd0ccc-k8s-csi--node--driver--jjztm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--ca65cd0ccc-k8s-csi--node--driver--jjztm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"64c67752-89ee-4f26-b63e-b37e41be4790", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 45, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-ca65cd0ccc", ContainerID:"ed4c973e8b3a9e3413e0f6e2da06b4c9afc03ca940400b8d31dc588f1910975d", Pod:"csi-node-driver-jjztm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.21.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali15b548155ea", MAC:"d2:50:19:76:74:d8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:46:14.139220 containerd[1714]: 
2025-09-12 17:46:14.133 [INFO][5150] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ed4c973e8b3a9e3413e0f6e2da06b4c9afc03ca940400b8d31dc588f1910975d" Namespace="calico-system" Pod="csi-node-driver-jjztm" WorkloadEndpoint="ci--4081.3.6--a--ca65cd0ccc-k8s-csi--node--driver--jjztm-eth0" Sep 12 17:46:14.208269 systemd-networkd[1576]: cali27c0a359e7b: Link UP Sep 12 17:46:14.208936 systemd-networkd[1576]: cali27c0a359e7b: Gained carrier Sep 12 17:46:14.214933 containerd[1714]: time="2025-09-12T17:46:14.213710144Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:46:14.214933 containerd[1714]: time="2025-09-12T17:46:14.213770624Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:46:14.214933 containerd[1714]: time="2025-09-12T17:46:14.213786864Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:46:14.217194 containerd[1714]: time="2025-09-12T17:46:14.215030105Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:46:14.254468 containerd[1714]: 2025-09-12 17:46:13.929 [INFO][5169] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--a--ca65cd0ccc-k8s-calico--apiserver--868c8dbd57--s6vg4-eth0 calico-apiserver-868c8dbd57- calico-apiserver bbd20681-1f6f-410b-806a-7fe01c4c3226 987 0 2025-09-12 17:45:38 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:868c8dbd57 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-a-ca65cd0ccc calico-apiserver-868c8dbd57-s6vg4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali27c0a359e7b [] [] }} ContainerID="5ccf3d7f1bd3b1750e05760933337c3a80dae8095b11eaddc99c8ad8ec6e12b4" Namespace="calico-apiserver" Pod="calico-apiserver-868c8dbd57-s6vg4" WorkloadEndpoint="ci--4081.3.6--a--ca65cd0ccc-k8s-calico--apiserver--868c8dbd57--s6vg4-" Sep 12 17:46:14.254468 containerd[1714]: 2025-09-12 17:46:13.929 [INFO][5169] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5ccf3d7f1bd3b1750e05760933337c3a80dae8095b11eaddc99c8ad8ec6e12b4" Namespace="calico-apiserver" Pod="calico-apiserver-868c8dbd57-s6vg4" WorkloadEndpoint="ci--4081.3.6--a--ca65cd0ccc-k8s-calico--apiserver--868c8dbd57--s6vg4-eth0" Sep 12 17:46:14.254468 containerd[1714]: 2025-09-12 17:46:14.014 [INFO][5206] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5ccf3d7f1bd3b1750e05760933337c3a80dae8095b11eaddc99c8ad8ec6e12b4" HandleID="k8s-pod-network.5ccf3d7f1bd3b1750e05760933337c3a80dae8095b11eaddc99c8ad8ec6e12b4" Workload="ci--4081.3.6--a--ca65cd0ccc-k8s-calico--apiserver--868c8dbd57--s6vg4-eth0" Sep 12 17:46:14.254468 containerd[1714]: 2025-09-12 17:46:14.014 [INFO][5206] ipam/ipam_plugin.go 265: Auto 
assigning IP ContainerID="5ccf3d7f1bd3b1750e05760933337c3a80dae8095b11eaddc99c8ad8ec6e12b4" HandleID="k8s-pod-network.5ccf3d7f1bd3b1750e05760933337c3a80dae8095b11eaddc99c8ad8ec6e12b4" Workload="ci--4081.3.6--a--ca65cd0ccc-k8s-calico--apiserver--868c8dbd57--s6vg4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b300), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-a-ca65cd0ccc", "pod":"calico-apiserver-868c8dbd57-s6vg4", "timestamp":"2025-09-12 17:46:14.014056295 +0000 UTC"}, Hostname:"ci-4081.3.6-a-ca65cd0ccc", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:46:14.254468 containerd[1714]: 2025-09-12 17:46:14.014 [INFO][5206] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:46:14.254468 containerd[1714]: 2025-09-12 17:46:14.076 [INFO][5206] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:46:14.254468 containerd[1714]: 2025-09-12 17:46:14.076 [INFO][5206] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-a-ca65cd0ccc' Sep 12 17:46:14.254468 containerd[1714]: 2025-09-12 17:46:14.107 [INFO][5206] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5ccf3d7f1bd3b1750e05760933337c3a80dae8095b11eaddc99c8ad8ec6e12b4" host="ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:46:14.254468 containerd[1714]: 2025-09-12 17:46:14.124 [INFO][5206] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:46:14.254468 containerd[1714]: 2025-09-12 17:46:14.140 [INFO][5206] ipam/ipam.go 511: Trying affinity for 192.168.21.128/26 host="ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:46:14.254468 containerd[1714]: 2025-09-12 17:46:14.146 [INFO][5206] ipam/ipam.go 158: Attempting to load block cidr=192.168.21.128/26 host="ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:46:14.254468 containerd[1714]: 2025-09-12 17:46:14.152 [INFO][5206] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.21.128/26 host="ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:46:14.254468 containerd[1714]: 2025-09-12 17:46:14.152 [INFO][5206] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.21.128/26 handle="k8s-pod-network.5ccf3d7f1bd3b1750e05760933337c3a80dae8095b11eaddc99c8ad8ec6e12b4" host="ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:46:14.254468 containerd[1714]: 2025-09-12 17:46:14.155 [INFO][5206] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5ccf3d7f1bd3b1750e05760933337c3a80dae8095b11eaddc99c8ad8ec6e12b4 Sep 12 17:46:14.254468 containerd[1714]: 2025-09-12 17:46:14.165 [INFO][5206] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.21.128/26 handle="k8s-pod-network.5ccf3d7f1bd3b1750e05760933337c3a80dae8095b11eaddc99c8ad8ec6e12b4" host="ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:46:14.254468 containerd[1714]: 2025-09-12 17:46:14.188 [INFO][5206] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.21.134/26] block=192.168.21.128/26 handle="k8s-pod-network.5ccf3d7f1bd3b1750e05760933337c3a80dae8095b11eaddc99c8ad8ec6e12b4" host="ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:46:14.254468 containerd[1714]: 2025-09-12 17:46:14.188 [INFO][5206] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.21.134/26] handle="k8s-pod-network.5ccf3d7f1bd3b1750e05760933337c3a80dae8095b11eaddc99c8ad8ec6e12b4" host="ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:46:14.254468 containerd[1714]: 2025-09-12 17:46:14.188 [INFO][5206] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:46:14.254468 containerd[1714]: 2025-09-12 17:46:14.188 [INFO][5206] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.21.134/26] IPv6=[] ContainerID="5ccf3d7f1bd3b1750e05760933337c3a80dae8095b11eaddc99c8ad8ec6e12b4" HandleID="k8s-pod-network.5ccf3d7f1bd3b1750e05760933337c3a80dae8095b11eaddc99c8ad8ec6e12b4" Workload="ci--4081.3.6--a--ca65cd0ccc-k8s-calico--apiserver--868c8dbd57--s6vg4-eth0" Sep 12 17:46:14.256404 containerd[1714]: 2025-09-12 17:46:14.205 [INFO][5169] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5ccf3d7f1bd3b1750e05760933337c3a80dae8095b11eaddc99c8ad8ec6e12b4" Namespace="calico-apiserver" Pod="calico-apiserver-868c8dbd57-s6vg4" WorkloadEndpoint="ci--4081.3.6--a--ca65cd0ccc-k8s-calico--apiserver--868c8dbd57--s6vg4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--ca65cd0ccc-k8s-calico--apiserver--868c8dbd57--s6vg4-eth0", GenerateName:"calico-apiserver-868c8dbd57-", Namespace:"calico-apiserver", SelfLink:"", UID:"bbd20681-1f6f-410b-806a-7fe01c4c3226", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 45, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"868c8dbd57", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-ca65cd0ccc", ContainerID:"", Pod:"calico-apiserver-868c8dbd57-s6vg4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.21.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali27c0a359e7b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:46:14.256404 containerd[1714]: 2025-09-12 17:46:14.205 [INFO][5169] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.21.134/32] ContainerID="5ccf3d7f1bd3b1750e05760933337c3a80dae8095b11eaddc99c8ad8ec6e12b4" Namespace="calico-apiserver" Pod="calico-apiserver-868c8dbd57-s6vg4" WorkloadEndpoint="ci--4081.3.6--a--ca65cd0ccc-k8s-calico--apiserver--868c8dbd57--s6vg4-eth0" Sep 12 17:46:14.256404 containerd[1714]: 2025-09-12 17:46:14.205 [INFO][5169] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali27c0a359e7b ContainerID="5ccf3d7f1bd3b1750e05760933337c3a80dae8095b11eaddc99c8ad8ec6e12b4" Namespace="calico-apiserver" Pod="calico-apiserver-868c8dbd57-s6vg4" WorkloadEndpoint="ci--4081.3.6--a--ca65cd0ccc-k8s-calico--apiserver--868c8dbd57--s6vg4-eth0" Sep 12 17:46:14.256404 containerd[1714]: 2025-09-12 17:46:14.210 [INFO][5169] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5ccf3d7f1bd3b1750e05760933337c3a80dae8095b11eaddc99c8ad8ec6e12b4" Namespace="calico-apiserver" 
Pod="calico-apiserver-868c8dbd57-s6vg4" WorkloadEndpoint="ci--4081.3.6--a--ca65cd0ccc-k8s-calico--apiserver--868c8dbd57--s6vg4-eth0" Sep 12 17:46:14.256404 containerd[1714]: 2025-09-12 17:46:14.223 [INFO][5169] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5ccf3d7f1bd3b1750e05760933337c3a80dae8095b11eaddc99c8ad8ec6e12b4" Namespace="calico-apiserver" Pod="calico-apiserver-868c8dbd57-s6vg4" WorkloadEndpoint="ci--4081.3.6--a--ca65cd0ccc-k8s-calico--apiserver--868c8dbd57--s6vg4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--ca65cd0ccc-k8s-calico--apiserver--868c8dbd57--s6vg4-eth0", GenerateName:"calico-apiserver-868c8dbd57-", Namespace:"calico-apiserver", SelfLink:"", UID:"bbd20681-1f6f-410b-806a-7fe01c4c3226", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 45, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"868c8dbd57", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-ca65cd0ccc", ContainerID:"5ccf3d7f1bd3b1750e05760933337c3a80dae8095b11eaddc99c8ad8ec6e12b4", Pod:"calico-apiserver-868c8dbd57-s6vg4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.21.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, 
InterfaceName:"cali27c0a359e7b", MAC:"da:7a:a5:4f:f4:b0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:46:14.256404 containerd[1714]: 2025-09-12 17:46:14.246 [INFO][5169] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5ccf3d7f1bd3b1750e05760933337c3a80dae8095b11eaddc99c8ad8ec6e12b4" Namespace="calico-apiserver" Pod="calico-apiserver-868c8dbd57-s6vg4" WorkloadEndpoint="ci--4081.3.6--a--ca65cd0ccc-k8s-calico--apiserver--868c8dbd57--s6vg4-eth0" Sep 12 17:46:14.258502 systemd[1]: Started cri-containerd-ed4c973e8b3a9e3413e0f6e2da06b4c9afc03ca940400b8d31dc588f1910975d.scope - libcontainer container ed4c973e8b3a9e3413e0f6e2da06b4c9afc03ca940400b8d31dc588f1910975d. Sep 12 17:46:14.272265 containerd[1714]: time="2025-09-12T17:46:14.272092073Z" level=info msg="StopPodSandbox for \"ba4f512a2fc1ac29afedd0f5e54eb79f9951c31c8fb924d51640398aff809e55\"" Sep 12 17:46:14.273188 containerd[1714]: time="2025-09-12T17:46:14.273093074Z" level=info msg="StopPodSandbox for \"e7574ca7035713c5e387975709e4546cfa67b972fafaca77538d91d7bf3d6afa\"" Sep 12 17:46:14.312992 containerd[1714]: time="2025-09-12T17:46:14.311995707Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:46:14.320837 containerd[1714]: time="2025-09-12T17:46:14.312557907Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:46:14.321440 containerd[1714]: time="2025-09-12T17:46:14.320495634Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:46:14.321440 containerd[1714]: time="2025-09-12T17:46:14.320625994Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:46:14.344584 systemd[1]: Started cri-containerd-5ccf3d7f1bd3b1750e05760933337c3a80dae8095b11eaddc99c8ad8ec6e12b4.scope - libcontainer container 5ccf3d7f1bd3b1750e05760933337c3a80dae8095b11eaddc99c8ad8ec6e12b4. Sep 12 17:46:14.386257 containerd[1714]: time="2025-09-12T17:46:14.385610249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jjztm,Uid:64c67752-89ee-4f26-b63e-b37e41be4790,Namespace:calico-system,Attempt:1,} returns sandbox id \"ed4c973e8b3a9e3413e0f6e2da06b4c9afc03ca940400b8d31dc588f1910975d\"" Sep 12 17:46:14.405752 systemd[1]: run-netns-cni\x2d0b61ffb2\x2debac\x2ddec7\x2de077\x2df8b92a17b6f8.mount: Deactivated successfully. Sep 12 17:46:14.443784 containerd[1714]: time="2025-09-12T17:46:14.443736058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-868c8dbd57-s6vg4,Uid:bbd20681-1f6f-410b-806a-7fe01c4c3226,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"5ccf3d7f1bd3b1750e05760933337c3a80dae8095b11eaddc99c8ad8ec6e12b4\"" Sep 12 17:46:14.526347 containerd[1714]: 2025-09-12 17:46:14.445 [INFO][5294] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ba4f512a2fc1ac29afedd0f5e54eb79f9951c31c8fb924d51640398aff809e55" Sep 12 17:46:14.526347 containerd[1714]: 2025-09-12 17:46:14.445 [INFO][5294] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ba4f512a2fc1ac29afedd0f5e54eb79f9951c31c8fb924d51640398aff809e55" iface="eth0" netns="/var/run/netns/cni-8e77385a-73ec-10ac-018e-fffe9f0d821c" Sep 12 17:46:14.526347 containerd[1714]: 2025-09-12 17:46:14.446 [INFO][5294] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="ba4f512a2fc1ac29afedd0f5e54eb79f9951c31c8fb924d51640398aff809e55" iface="eth0" netns="/var/run/netns/cni-8e77385a-73ec-10ac-018e-fffe9f0d821c" Sep 12 17:46:14.526347 containerd[1714]: 2025-09-12 17:46:14.454 [INFO][5294] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ba4f512a2fc1ac29afedd0f5e54eb79f9951c31c8fb924d51640398aff809e55" iface="eth0" netns="/var/run/netns/cni-8e77385a-73ec-10ac-018e-fffe9f0d821c" Sep 12 17:46:14.526347 containerd[1714]: 2025-09-12 17:46:14.454 [INFO][5294] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ba4f512a2fc1ac29afedd0f5e54eb79f9951c31c8fb924d51640398aff809e55" Sep 12 17:46:14.526347 containerd[1714]: 2025-09-12 17:46:14.454 [INFO][5294] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ba4f512a2fc1ac29afedd0f5e54eb79f9951c31c8fb924d51640398aff809e55" Sep 12 17:46:14.526347 containerd[1714]: 2025-09-12 17:46:14.502 [INFO][5348] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ba4f512a2fc1ac29afedd0f5e54eb79f9951c31c8fb924d51640398aff809e55" HandleID="k8s-pod-network.ba4f512a2fc1ac29afedd0f5e54eb79f9951c31c8fb924d51640398aff809e55" Workload="ci--4081.3.6--a--ca65cd0ccc-k8s-calico--kube--controllers--84df6476b7--wbsmk-eth0" Sep 12 17:46:14.526347 containerd[1714]: 2025-09-12 17:46:14.503 [INFO][5348] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:46:14.526347 containerd[1714]: 2025-09-12 17:46:14.503 [INFO][5348] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:46:14.526347 containerd[1714]: 2025-09-12 17:46:14.514 [WARNING][5348] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ba4f512a2fc1ac29afedd0f5e54eb79f9951c31c8fb924d51640398aff809e55" HandleID="k8s-pod-network.ba4f512a2fc1ac29afedd0f5e54eb79f9951c31c8fb924d51640398aff809e55" Workload="ci--4081.3.6--a--ca65cd0ccc-k8s-calico--kube--controllers--84df6476b7--wbsmk-eth0" Sep 12 17:46:14.526347 containerd[1714]: 2025-09-12 17:46:14.514 [INFO][5348] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ba4f512a2fc1ac29afedd0f5e54eb79f9951c31c8fb924d51640398aff809e55" HandleID="k8s-pod-network.ba4f512a2fc1ac29afedd0f5e54eb79f9951c31c8fb924d51640398aff809e55" Workload="ci--4081.3.6--a--ca65cd0ccc-k8s-calico--kube--controllers--84df6476b7--wbsmk-eth0" Sep 12 17:46:14.526347 containerd[1714]: 2025-09-12 17:46:14.516 [INFO][5348] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:46:14.526347 containerd[1714]: 2025-09-12 17:46:14.517 [INFO][5294] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ba4f512a2fc1ac29afedd0f5e54eb79f9951c31c8fb924d51640398aff809e55" Sep 12 17:46:14.533468 containerd[1714]: time="2025-09-12T17:46:14.528867690Z" level=info msg="TearDown network for sandbox \"ba4f512a2fc1ac29afedd0f5e54eb79f9951c31c8fb924d51640398aff809e55\" successfully" Sep 12 17:46:14.533468 containerd[1714]: time="2025-09-12T17:46:14.528899810Z" level=info msg="StopPodSandbox for \"ba4f512a2fc1ac29afedd0f5e54eb79f9951c31c8fb924d51640398aff809e55\" returns successfully" Sep 12 17:46:14.533468 containerd[1714]: time="2025-09-12T17:46:14.532374453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84df6476b7-wbsmk,Uid:cb0990fd-3735-45b3-86b2-04ad783e7243,Namespace:calico-system,Attempt:1,}" Sep 12 17:46:14.533045 systemd[1]: run-netns-cni\x2d8e77385a\x2d73ec\x2d10ac\x2d018e\x2dfffe9f0d821c.mount: Deactivated successfully. 
Sep 12 17:46:14.567574 containerd[1714]: 2025-09-12 17:46:14.488 [INFO][5282] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e7574ca7035713c5e387975709e4546cfa67b972fafaca77538d91d7bf3d6afa" Sep 12 17:46:14.567574 containerd[1714]: 2025-09-12 17:46:14.489 [INFO][5282] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e7574ca7035713c5e387975709e4546cfa67b972fafaca77538d91d7bf3d6afa" iface="eth0" netns="/var/run/netns/cni-de48964f-e59e-f713-1f26-d4b77247c559" Sep 12 17:46:14.567574 containerd[1714]: 2025-09-12 17:46:14.489 [INFO][5282] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e7574ca7035713c5e387975709e4546cfa67b972fafaca77538d91d7bf3d6afa" iface="eth0" netns="/var/run/netns/cni-de48964f-e59e-f713-1f26-d4b77247c559" Sep 12 17:46:14.567574 containerd[1714]: 2025-09-12 17:46:14.489 [INFO][5282] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e7574ca7035713c5e387975709e4546cfa67b972fafaca77538d91d7bf3d6afa" iface="eth0" netns="/var/run/netns/cni-de48964f-e59e-f713-1f26-d4b77247c559" Sep 12 17:46:14.567574 containerd[1714]: 2025-09-12 17:46:14.489 [INFO][5282] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e7574ca7035713c5e387975709e4546cfa67b972fafaca77538d91d7bf3d6afa" Sep 12 17:46:14.567574 containerd[1714]: 2025-09-12 17:46:14.490 [INFO][5282] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e7574ca7035713c5e387975709e4546cfa67b972fafaca77538d91d7bf3d6afa" Sep 12 17:46:14.567574 containerd[1714]: 2025-09-12 17:46:14.538 [INFO][5355] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e7574ca7035713c5e387975709e4546cfa67b972fafaca77538d91d7bf3d6afa" HandleID="k8s-pod-network.e7574ca7035713c5e387975709e4546cfa67b972fafaca77538d91d7bf3d6afa" Workload="ci--4081.3.6--a--ca65cd0ccc-k8s-coredns--674b8bbfcf--x6pz9-eth0" Sep 12 17:46:14.567574 containerd[1714]: 2025-09-12 17:46:14.538 [INFO][5355] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:46:14.567574 containerd[1714]: 2025-09-12 17:46:14.538 [INFO][5355] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:46:14.567574 containerd[1714]: 2025-09-12 17:46:14.557 [WARNING][5355] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e7574ca7035713c5e387975709e4546cfa67b972fafaca77538d91d7bf3d6afa" HandleID="k8s-pod-network.e7574ca7035713c5e387975709e4546cfa67b972fafaca77538d91d7bf3d6afa" Workload="ci--4081.3.6--a--ca65cd0ccc-k8s-coredns--674b8bbfcf--x6pz9-eth0" Sep 12 17:46:14.567574 containerd[1714]: 2025-09-12 17:46:14.558 [INFO][5355] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e7574ca7035713c5e387975709e4546cfa67b972fafaca77538d91d7bf3d6afa" HandleID="k8s-pod-network.e7574ca7035713c5e387975709e4546cfa67b972fafaca77538d91d7bf3d6afa" Workload="ci--4081.3.6--a--ca65cd0ccc-k8s-coredns--674b8bbfcf--x6pz9-eth0" Sep 12 17:46:14.567574 containerd[1714]: 2025-09-12 17:46:14.561 [INFO][5355] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:46:14.567574 containerd[1714]: 2025-09-12 17:46:14.564 [INFO][5282] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e7574ca7035713c5e387975709e4546cfa67b972fafaca77538d91d7bf3d6afa" Sep 12 17:46:14.568172 containerd[1714]: time="2025-09-12T17:46:14.568119323Z" level=info msg="TearDown network for sandbox \"e7574ca7035713c5e387975709e4546cfa67b972fafaca77538d91d7bf3d6afa\" successfully" Sep 12 17:46:14.568520 containerd[1714]: time="2025-09-12T17:46:14.568250443Z" level=info msg="StopPodSandbox for \"e7574ca7035713c5e387975709e4546cfa67b972fafaca77538d91d7bf3d6afa\" returns successfully" Sep 12 17:46:14.572791 systemd[1]: run-netns-cni\x2dde48964f\x2de59e\x2df713\x2d1f26\x2dd4b77247c559.mount: Deactivated successfully. 
Sep 12 17:46:14.574345 containerd[1714]: time="2025-09-12T17:46:14.573122647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-x6pz9,Uid:acb602f3-ad8a-4d55-aba0-1c4c66a93bb2,Namespace:kube-system,Attempt:1,}" Sep 12 17:46:14.758649 systemd-networkd[1576]: cali231bd9f4ebb: Link UP Sep 12 17:46:14.765883 systemd-networkd[1576]: cali231bd9f4ebb: Gained carrier Sep 12 17:46:14.793534 containerd[1714]: 2025-09-12 17:46:14.646 [INFO][5363] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--a--ca65cd0ccc-k8s-calico--kube--controllers--84df6476b7--wbsmk-eth0 calico-kube-controllers-84df6476b7- calico-system cb0990fd-3735-45b3-86b2-04ad783e7243 1010 0 2025-09-12 17:45:46 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:84df6476b7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.6-a-ca65cd0ccc calico-kube-controllers-84df6476b7-wbsmk eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali231bd9f4ebb [] [] }} ContainerID="059cb8cae07a39ed2f0e74e8f0aaf856c4b539ee0049600e2cabd1433685421d" Namespace="calico-system" Pod="calico-kube-controllers-84df6476b7-wbsmk" WorkloadEndpoint="ci--4081.3.6--a--ca65cd0ccc-k8s-calico--kube--controllers--84df6476b7--wbsmk-" Sep 12 17:46:14.793534 containerd[1714]: 2025-09-12 17:46:14.646 [INFO][5363] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="059cb8cae07a39ed2f0e74e8f0aaf856c4b539ee0049600e2cabd1433685421d" Namespace="calico-system" Pod="calico-kube-controllers-84df6476b7-wbsmk" WorkloadEndpoint="ci--4081.3.6--a--ca65cd0ccc-k8s-calico--kube--controllers--84df6476b7--wbsmk-eth0" Sep 12 17:46:14.793534 containerd[1714]: 2025-09-12 17:46:14.680 [INFO][5387] ipam/ipam_plugin.go 225: Calico CNI IPAM request 
count IPv4=1 IPv6=0 ContainerID="059cb8cae07a39ed2f0e74e8f0aaf856c4b539ee0049600e2cabd1433685421d" HandleID="k8s-pod-network.059cb8cae07a39ed2f0e74e8f0aaf856c4b539ee0049600e2cabd1433685421d" Workload="ci--4081.3.6--a--ca65cd0ccc-k8s-calico--kube--controllers--84df6476b7--wbsmk-eth0" Sep 12 17:46:14.793534 containerd[1714]: 2025-09-12 17:46:14.680 [INFO][5387] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="059cb8cae07a39ed2f0e74e8f0aaf856c4b539ee0049600e2cabd1433685421d" HandleID="k8s-pod-network.059cb8cae07a39ed2f0e74e8f0aaf856c4b539ee0049600e2cabd1433685421d" Workload="ci--4081.3.6--a--ca65cd0ccc-k8s-calico--kube--controllers--84df6476b7--wbsmk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d39c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-a-ca65cd0ccc", "pod":"calico-kube-controllers-84df6476b7-wbsmk", "timestamp":"2025-09-12 17:46:14.680646538 +0000 UTC"}, Hostname:"ci-4081.3.6-a-ca65cd0ccc", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:46:14.793534 containerd[1714]: 2025-09-12 17:46:14.681 [INFO][5387] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:46:14.793534 containerd[1714]: 2025-09-12 17:46:14.681 [INFO][5387] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:46:14.793534 containerd[1714]: 2025-09-12 17:46:14.681 [INFO][5387] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-a-ca65cd0ccc' Sep 12 17:46:14.793534 containerd[1714]: 2025-09-12 17:46:14.692 [INFO][5387] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.059cb8cae07a39ed2f0e74e8f0aaf856c4b539ee0049600e2cabd1433685421d" host="ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:46:14.793534 containerd[1714]: 2025-09-12 17:46:14.700 [INFO][5387] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:46:14.793534 containerd[1714]: 2025-09-12 17:46:14.706 [INFO][5387] ipam/ipam.go 511: Trying affinity for 192.168.21.128/26 host="ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:46:14.793534 containerd[1714]: 2025-09-12 17:46:14.711 [INFO][5387] ipam/ipam.go 158: Attempting to load block cidr=192.168.21.128/26 host="ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:46:14.793534 containerd[1714]: 2025-09-12 17:46:14.715 [INFO][5387] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.21.128/26 host="ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:46:14.793534 containerd[1714]: 2025-09-12 17:46:14.715 [INFO][5387] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.21.128/26 handle="k8s-pod-network.059cb8cae07a39ed2f0e74e8f0aaf856c4b539ee0049600e2cabd1433685421d" host="ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:46:14.793534 containerd[1714]: 2025-09-12 17:46:14.718 [INFO][5387] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.059cb8cae07a39ed2f0e74e8f0aaf856c4b539ee0049600e2cabd1433685421d Sep 12 17:46:14.793534 containerd[1714]: 2025-09-12 17:46:14.728 [INFO][5387] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.21.128/26 handle="k8s-pod-network.059cb8cae07a39ed2f0e74e8f0aaf856c4b539ee0049600e2cabd1433685421d" host="ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:46:14.793534 containerd[1714]: 2025-09-12 17:46:14.743 [INFO][5387] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.21.135/26] block=192.168.21.128/26 handle="k8s-pod-network.059cb8cae07a39ed2f0e74e8f0aaf856c4b539ee0049600e2cabd1433685421d" host="ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:46:14.793534 containerd[1714]: 2025-09-12 17:46:14.743 [INFO][5387] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.21.135/26] handle="k8s-pod-network.059cb8cae07a39ed2f0e74e8f0aaf856c4b539ee0049600e2cabd1433685421d" host="ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:46:14.793534 containerd[1714]: 2025-09-12 17:46:14.743 [INFO][5387] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:46:14.793534 containerd[1714]: 2025-09-12 17:46:14.743 [INFO][5387] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.21.135/26] IPv6=[] ContainerID="059cb8cae07a39ed2f0e74e8f0aaf856c4b539ee0049600e2cabd1433685421d" HandleID="k8s-pod-network.059cb8cae07a39ed2f0e74e8f0aaf856c4b539ee0049600e2cabd1433685421d" Workload="ci--4081.3.6--a--ca65cd0ccc-k8s-calico--kube--controllers--84df6476b7--wbsmk-eth0" Sep 12 17:46:14.795042 containerd[1714]: 2025-09-12 17:46:14.749 [INFO][5363] cni-plugin/k8s.go 418: Populated endpoint ContainerID="059cb8cae07a39ed2f0e74e8f0aaf856c4b539ee0049600e2cabd1433685421d" Namespace="calico-system" Pod="calico-kube-controllers-84df6476b7-wbsmk" WorkloadEndpoint="ci--4081.3.6--a--ca65cd0ccc-k8s-calico--kube--controllers--84df6476b7--wbsmk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--ca65cd0ccc-k8s-calico--kube--controllers--84df6476b7--wbsmk-eth0", GenerateName:"calico-kube-controllers-84df6476b7-", Namespace:"calico-system", SelfLink:"", UID:"cb0990fd-3735-45b3-86b2-04ad783e7243", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 45, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84df6476b7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-ca65cd0ccc", ContainerID:"", Pod:"calico-kube-controllers-84df6476b7-wbsmk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.21.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali231bd9f4ebb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:46:14.795042 containerd[1714]: 2025-09-12 17:46:14.749 [INFO][5363] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.21.135/32] ContainerID="059cb8cae07a39ed2f0e74e8f0aaf856c4b539ee0049600e2cabd1433685421d" Namespace="calico-system" Pod="calico-kube-controllers-84df6476b7-wbsmk" WorkloadEndpoint="ci--4081.3.6--a--ca65cd0ccc-k8s-calico--kube--controllers--84df6476b7--wbsmk-eth0" Sep 12 17:46:14.795042 containerd[1714]: 2025-09-12 17:46:14.749 [INFO][5363] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali231bd9f4ebb ContainerID="059cb8cae07a39ed2f0e74e8f0aaf856c4b539ee0049600e2cabd1433685421d" Namespace="calico-system" Pod="calico-kube-controllers-84df6476b7-wbsmk" WorkloadEndpoint="ci--4081.3.6--a--ca65cd0ccc-k8s-calico--kube--controllers--84df6476b7--wbsmk-eth0" Sep 12 17:46:14.795042 containerd[1714]: 2025-09-12 17:46:14.767 [INFO][5363] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="059cb8cae07a39ed2f0e74e8f0aaf856c4b539ee0049600e2cabd1433685421d" Namespace="calico-system" Pod="calico-kube-controllers-84df6476b7-wbsmk" WorkloadEndpoint="ci--4081.3.6--a--ca65cd0ccc-k8s-calico--kube--controllers--84df6476b7--wbsmk-eth0" Sep 12 17:46:14.795042 containerd[1714]: 2025-09-12 17:46:14.770 [INFO][5363] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="059cb8cae07a39ed2f0e74e8f0aaf856c4b539ee0049600e2cabd1433685421d" Namespace="calico-system" Pod="calico-kube-controllers-84df6476b7-wbsmk" WorkloadEndpoint="ci--4081.3.6--a--ca65cd0ccc-k8s-calico--kube--controllers--84df6476b7--wbsmk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--ca65cd0ccc-k8s-calico--kube--controllers--84df6476b7--wbsmk-eth0", GenerateName:"calico-kube-controllers-84df6476b7-", Namespace:"calico-system", SelfLink:"", UID:"cb0990fd-3735-45b3-86b2-04ad783e7243", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 45, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84df6476b7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-ca65cd0ccc", ContainerID:"059cb8cae07a39ed2f0e74e8f0aaf856c4b539ee0049600e2cabd1433685421d", Pod:"calico-kube-controllers-84df6476b7-wbsmk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.21.135/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali231bd9f4ebb", MAC:"ea:70:94:16:06:b2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:46:14.795042 containerd[1714]: 2025-09-12 17:46:14.791 [INFO][5363] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="059cb8cae07a39ed2f0e74e8f0aaf856c4b539ee0049600e2cabd1433685421d" Namespace="calico-system" Pod="calico-kube-controllers-84df6476b7-wbsmk" WorkloadEndpoint="ci--4081.3.6--a--ca65cd0ccc-k8s-calico--kube--controllers--84df6476b7--wbsmk-eth0" Sep 12 17:46:14.853825 containerd[1714]: time="2025-09-12T17:46:14.852558483Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:46:14.853825 containerd[1714]: time="2025-09-12T17:46:14.852622083Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:46:14.853825 containerd[1714]: time="2025-09-12T17:46:14.852650243Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:46:14.853825 containerd[1714]: time="2025-09-12T17:46:14.852763084Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:46:14.883457 systemd[1]: Started cri-containerd-059cb8cae07a39ed2f0e74e8f0aaf856c4b539ee0049600e2cabd1433685421d.scope - libcontainer container 059cb8cae07a39ed2f0e74e8f0aaf856c4b539ee0049600e2cabd1433685421d. 
Sep 12 17:46:14.903854 systemd-networkd[1576]: cali02120edfea8: Link UP Sep 12 17:46:14.907156 systemd-networkd[1576]: cali02120edfea8: Gained carrier Sep 12 17:46:14.929200 containerd[1714]: 2025-09-12 17:46:14.732 [INFO][5374] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--a--ca65cd0ccc-k8s-coredns--674b8bbfcf--x6pz9-eth0 coredns-674b8bbfcf- kube-system acb602f3-ad8a-4d55-aba0-1c4c66a93bb2 1012 0 2025-09-12 17:45:28 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-a-ca65cd0ccc coredns-674b8bbfcf-x6pz9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali02120edfea8 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="fea34b80046dc8fbf477fb05fd660bd212c5d87cb53dd9c2cc2615a7bafce9b2" Namespace="kube-system" Pod="coredns-674b8bbfcf-x6pz9" WorkloadEndpoint="ci--4081.3.6--a--ca65cd0ccc-k8s-coredns--674b8bbfcf--x6pz9-" Sep 12 17:46:14.929200 containerd[1714]: 2025-09-12 17:46:14.732 [INFO][5374] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fea34b80046dc8fbf477fb05fd660bd212c5d87cb53dd9c2cc2615a7bafce9b2" Namespace="kube-system" Pod="coredns-674b8bbfcf-x6pz9" WorkloadEndpoint="ci--4081.3.6--a--ca65cd0ccc-k8s-coredns--674b8bbfcf--x6pz9-eth0" Sep 12 17:46:14.929200 containerd[1714]: 2025-09-12 17:46:14.810 [INFO][5396] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fea34b80046dc8fbf477fb05fd660bd212c5d87cb53dd9c2cc2615a7bafce9b2" HandleID="k8s-pod-network.fea34b80046dc8fbf477fb05fd660bd212c5d87cb53dd9c2cc2615a7bafce9b2" Workload="ci--4081.3.6--a--ca65cd0ccc-k8s-coredns--674b8bbfcf--x6pz9-eth0" Sep 12 17:46:14.929200 containerd[1714]: 2025-09-12 17:46:14.810 [INFO][5396] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="fea34b80046dc8fbf477fb05fd660bd212c5d87cb53dd9c2cc2615a7bafce9b2" HandleID="k8s-pod-network.fea34b80046dc8fbf477fb05fd660bd212c5d87cb53dd9c2cc2615a7bafce9b2" Workload="ci--4081.3.6--a--ca65cd0ccc-k8s-coredns--674b8bbfcf--x6pz9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400035faa0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-a-ca65cd0ccc", "pod":"coredns-674b8bbfcf-x6pz9", "timestamp":"2025-09-12 17:46:14.809887807 +0000 UTC"}, Hostname:"ci-4081.3.6-a-ca65cd0ccc", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:46:14.929200 containerd[1714]: 2025-09-12 17:46:14.810 [INFO][5396] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:46:14.929200 containerd[1714]: 2025-09-12 17:46:14.810 [INFO][5396] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:46:14.929200 containerd[1714]: 2025-09-12 17:46:14.810 [INFO][5396] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-a-ca65cd0ccc' Sep 12 17:46:14.929200 containerd[1714]: 2025-09-12 17:46:14.830 [INFO][5396] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fea34b80046dc8fbf477fb05fd660bd212c5d87cb53dd9c2cc2615a7bafce9b2" host="ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:46:14.929200 containerd[1714]: 2025-09-12 17:46:14.844 [INFO][5396] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:46:14.929200 containerd[1714]: 2025-09-12 17:46:14.852 [INFO][5396] ipam/ipam.go 511: Trying affinity for 192.168.21.128/26 host="ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:46:14.929200 containerd[1714]: 2025-09-12 17:46:14.856 [INFO][5396] ipam/ipam.go 158: Attempting to load block cidr=192.168.21.128/26 host="ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:46:14.929200 containerd[1714]: 2025-09-12 17:46:14.861 [INFO][5396] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.21.128/26 host="ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:46:14.929200 containerd[1714]: 2025-09-12 17:46:14.861 [INFO][5396] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.21.128/26 handle="k8s-pod-network.fea34b80046dc8fbf477fb05fd660bd212c5d87cb53dd9c2cc2615a7bafce9b2" host="ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:46:14.929200 containerd[1714]: 2025-09-12 17:46:14.868 [INFO][5396] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.fea34b80046dc8fbf477fb05fd660bd212c5d87cb53dd9c2cc2615a7bafce9b2 Sep 12 17:46:14.929200 containerd[1714]: 2025-09-12 17:46:14.878 [INFO][5396] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.21.128/26 handle="k8s-pod-network.fea34b80046dc8fbf477fb05fd660bd212c5d87cb53dd9c2cc2615a7bafce9b2" host="ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:46:14.929200 containerd[1714]: 2025-09-12 17:46:14.892 [INFO][5396] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.21.136/26] block=192.168.21.128/26 handle="k8s-pod-network.fea34b80046dc8fbf477fb05fd660bd212c5d87cb53dd9c2cc2615a7bafce9b2" host="ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:46:14.929200 containerd[1714]: 2025-09-12 17:46:14.892 [INFO][5396] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.21.136/26] handle="k8s-pod-network.fea34b80046dc8fbf477fb05fd660bd212c5d87cb53dd9c2cc2615a7bafce9b2" host="ci-4081.3.6-a-ca65cd0ccc" Sep 12 17:46:14.929200 containerd[1714]: 2025-09-12 17:46:14.892 [INFO][5396] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:46:14.929200 containerd[1714]: 2025-09-12 17:46:14.892 [INFO][5396] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.21.136/26] IPv6=[] ContainerID="fea34b80046dc8fbf477fb05fd660bd212c5d87cb53dd9c2cc2615a7bafce9b2" HandleID="k8s-pod-network.fea34b80046dc8fbf477fb05fd660bd212c5d87cb53dd9c2cc2615a7bafce9b2" Workload="ci--4081.3.6--a--ca65cd0ccc-k8s-coredns--674b8bbfcf--x6pz9-eth0" Sep 12 17:46:14.930471 containerd[1714]: 2025-09-12 17:46:14.896 [INFO][5374] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fea34b80046dc8fbf477fb05fd660bd212c5d87cb53dd9c2cc2615a7bafce9b2" Namespace="kube-system" Pod="coredns-674b8bbfcf-x6pz9" WorkloadEndpoint="ci--4081.3.6--a--ca65cd0ccc-k8s-coredns--674b8bbfcf--x6pz9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--ca65cd0ccc-k8s-coredns--674b8bbfcf--x6pz9-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"acb602f3-ad8a-4d55-aba0-1c4c66a93bb2", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 45, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-ca65cd0ccc", ContainerID:"", Pod:"coredns-674b8bbfcf-x6pz9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.21.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali02120edfea8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:46:14.930471 containerd[1714]: 2025-09-12 17:46:14.898 [INFO][5374] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.21.136/32] ContainerID="fea34b80046dc8fbf477fb05fd660bd212c5d87cb53dd9c2cc2615a7bafce9b2" Namespace="kube-system" Pod="coredns-674b8bbfcf-x6pz9" WorkloadEndpoint="ci--4081.3.6--a--ca65cd0ccc-k8s-coredns--674b8bbfcf--x6pz9-eth0" Sep 12 17:46:14.930471 containerd[1714]: 2025-09-12 17:46:14.898 [INFO][5374] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali02120edfea8 ContainerID="fea34b80046dc8fbf477fb05fd660bd212c5d87cb53dd9c2cc2615a7bafce9b2" Namespace="kube-system" Pod="coredns-674b8bbfcf-x6pz9" WorkloadEndpoint="ci--4081.3.6--a--ca65cd0ccc-k8s-coredns--674b8bbfcf--x6pz9-eth0" Sep 12 17:46:14.930471 containerd[1714]: 2025-09-12 17:46:14.908 [INFO][5374] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fea34b80046dc8fbf477fb05fd660bd212c5d87cb53dd9c2cc2615a7bafce9b2" Namespace="kube-system" Pod="coredns-674b8bbfcf-x6pz9" WorkloadEndpoint="ci--4081.3.6--a--ca65cd0ccc-k8s-coredns--674b8bbfcf--x6pz9-eth0" Sep 12 17:46:14.930471 containerd[1714]: 2025-09-12 17:46:14.909 [INFO][5374] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fea34b80046dc8fbf477fb05fd660bd212c5d87cb53dd9c2cc2615a7bafce9b2" Namespace="kube-system" Pod="coredns-674b8bbfcf-x6pz9" WorkloadEndpoint="ci--4081.3.6--a--ca65cd0ccc-k8s-coredns--674b8bbfcf--x6pz9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--ca65cd0ccc-k8s-coredns--674b8bbfcf--x6pz9-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"acb602f3-ad8a-4d55-aba0-1c4c66a93bb2", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 45, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-ca65cd0ccc", ContainerID:"fea34b80046dc8fbf477fb05fd660bd212c5d87cb53dd9c2cc2615a7bafce9b2", Pod:"coredns-674b8bbfcf-x6pz9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.21.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali02120edfea8", 
MAC:"86:25:28:5c:19:d7", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:46:14.930471 containerd[1714]: 2025-09-12 17:46:14.926 [INFO][5374] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fea34b80046dc8fbf477fb05fd660bd212c5d87cb53dd9c2cc2615a7bafce9b2" Namespace="kube-system" Pod="coredns-674b8bbfcf-x6pz9" WorkloadEndpoint="ci--4081.3.6--a--ca65cd0ccc-k8s-coredns--674b8bbfcf--x6pz9-eth0" Sep 12 17:46:14.958297 containerd[1714]: time="2025-09-12T17:46:14.958102453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84df6476b7-wbsmk,Uid:cb0990fd-3735-45b3-86b2-04ad783e7243,Namespace:calico-system,Attempt:1,} returns sandbox id \"059cb8cae07a39ed2f0e74e8f0aaf856c4b539ee0049600e2cabd1433685421d\"" Sep 12 17:46:14.966175 containerd[1714]: time="2025-09-12T17:46:14.966117219Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:46:14.972394 containerd[1714]: time="2025-09-12T17:46:14.972293305Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:46:14.972592 containerd[1714]: time="2025-09-12T17:46:14.972406305Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:46:14.972592 containerd[1714]: time="2025-09-12T17:46:14.972433825Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:46:14.972657 containerd[1714]: time="2025-09-12T17:46:14.972579505Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:46:14.974309 containerd[1714]: time="2025-09-12T17:46:14.973424865Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=30823700" Sep 12 17:46:14.978058 containerd[1714]: time="2025-09-12T17:46:14.978013389Z" level=info msg="ImageCreate event name:\"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:46:14.984419 containerd[1714]: time="2025-09-12T17:46:14.984379755Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:46:14.985076 containerd[1714]: time="2025-09-12T17:46:14.985037475Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"30823530\" in 3.505128522s" Sep 12 17:46:14.985132 containerd[1714]: time="2025-09-12T17:46:14.985075155Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\"" Sep 12 17:46:14.988416 containerd[1714]: 
time="2025-09-12T17:46:14.988376038Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 12 17:46:14.996645 systemd[1]: Started cri-containerd-fea34b80046dc8fbf477fb05fd660bd212c5d87cb53dd9c2cc2615a7bafce9b2.scope - libcontainer container fea34b80046dc8fbf477fb05fd660bd212c5d87cb53dd9c2cc2615a7bafce9b2. Sep 12 17:46:15.005631 containerd[1714]: time="2025-09-12T17:46:15.005588133Z" level=info msg="CreateContainer within sandbox \"d027aa289f6becf7bc8e638fd3dde937c061904bcbfad4f2bc33bcc4766f757a\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 12 17:46:15.046897 containerd[1714]: time="2025-09-12T17:46:15.046773767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-x6pz9,Uid:acb602f3-ad8a-4d55-aba0-1c4c66a93bb2,Namespace:kube-system,Attempt:1,} returns sandbox id \"fea34b80046dc8fbf477fb05fd660bd212c5d87cb53dd9c2cc2615a7bafce9b2\"" Sep 12 17:46:15.059903 containerd[1714]: time="2025-09-12T17:46:15.059706818Z" level=info msg="CreateContainer within sandbox \"fea34b80046dc8fbf477fb05fd660bd212c5d87cb53dd9c2cc2615a7bafce9b2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:46:15.083991 containerd[1714]: time="2025-09-12T17:46:15.083939799Z" level=info msg="CreateContainer within sandbox \"d027aa289f6becf7bc8e638fd3dde937c061904bcbfad4f2bc33bcc4766f757a\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"976c2b027b8c6636982941cf5baee8d060e74062fdca34fe3184456eac2dc236\"" Sep 12 17:46:15.085699 containerd[1714]: time="2025-09-12T17:46:15.085657800Z" level=info msg="StartContainer for \"976c2b027b8c6636982941cf5baee8d060e74062fdca34fe3184456eac2dc236\"" Sep 12 17:46:15.112439 systemd[1]: Started cri-containerd-976c2b027b8c6636982941cf5baee8d060e74062fdca34fe3184456eac2dc236.scope - libcontainer container 976c2b027b8c6636982941cf5baee8d060e74062fdca34fe3184456eac2dc236. 
Sep 12 17:46:15.125244 containerd[1714]: time="2025-09-12T17:46:15.125178834Z" level=info msg="CreateContainer within sandbox \"fea34b80046dc8fbf477fb05fd660bd212c5d87cb53dd9c2cc2615a7bafce9b2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"777f49f8c1beb3faaeb613ba157aaac3f94bfe7c5cd3fb51ddadd29efd71e714\"" Sep 12 17:46:15.126104 containerd[1714]: time="2025-09-12T17:46:15.126040274Z" level=info msg="StartContainer for \"777f49f8c1beb3faaeb613ba157aaac3f94bfe7c5cd3fb51ddadd29efd71e714\"" Sep 12 17:46:15.161641 systemd[1]: Started cri-containerd-777f49f8c1beb3faaeb613ba157aaac3f94bfe7c5cd3fb51ddadd29efd71e714.scope - libcontainer container 777f49f8c1beb3faaeb613ba157aaac3f94bfe7c5cd3fb51ddadd29efd71e714. Sep 12 17:46:15.173725 containerd[1714]: time="2025-09-12T17:46:15.173663115Z" level=info msg="StartContainer for \"976c2b027b8c6636982941cf5baee8d060e74062fdca34fe3184456eac2dc236\" returns successfully" Sep 12 17:46:15.207126 containerd[1714]: time="2025-09-12T17:46:15.207062183Z" level=info msg="StartContainer for \"777f49f8c1beb3faaeb613ba157aaac3f94bfe7c5cd3fb51ddadd29efd71e714\" returns successfully" Sep 12 17:46:15.403276 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4284565864.mount: Deactivated successfully. 
Sep 12 17:46:15.455534 systemd-networkd[1576]: cali27c0a359e7b: Gained IPv6LL Sep 12 17:46:15.455811 systemd-networkd[1576]: cali3a2f95f53a1: Gained IPv6LL Sep 12 17:46:15.594645 kubelet[3184]: I0912 17:46:15.594580 3184 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-x6pz9" podStartSLOduration=47.59456319 podStartE2EDuration="47.59456319s" podCreationTimestamp="2025-09-12 17:45:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:46:15.59377315 +0000 UTC m=+53.449401657" watchObservedRunningTime="2025-09-12 17:46:15.59456319 +0000 UTC m=+53.450191617" Sep 12 17:46:15.595378 kubelet[3184]: I0912 17:46:15.594797 3184 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-55964d555f-9vhf6" podStartSLOduration=1.8697425970000001 podStartE2EDuration="8.594791751s" podCreationTimestamp="2025-09-12 17:46:07 +0000 UTC" firstStartedPulling="2025-09-12 17:46:08.263142644 +0000 UTC m=+46.118771111" lastFinishedPulling="2025-09-12 17:46:14.988191758 +0000 UTC m=+52.843820265" observedRunningTime="2025-09-12 17:46:15.566938527 +0000 UTC m=+53.422566994" watchObservedRunningTime="2025-09-12 17:46:15.594791751 +0000 UTC m=+53.450420178" Sep 12 17:46:15.647503 systemd-networkd[1576]: cali15b548155ea: Gained IPv6LL Sep 12 17:46:16.543507 systemd-networkd[1576]: cali231bd9f4ebb: Gained IPv6LL Sep 12 17:46:16.863473 systemd-networkd[1576]: cali02120edfea8: Gained IPv6LL Sep 12 17:46:18.816510 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4134764791.mount: Deactivated successfully. 
Sep 12 17:46:19.254012 containerd[1714]: time="2025-09-12T17:46:19.253883709Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:46:19.258736 containerd[1714]: time="2025-09-12T17:46:19.258692153Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=61845332" Sep 12 17:46:19.262342 containerd[1714]: time="2025-09-12T17:46:19.262280636Z" level=info msg="ImageCreate event name:\"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:46:19.268015 containerd[1714]: time="2025-09-12T17:46:19.267952001Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:46:19.269148 containerd[1714]: time="2025-09-12T17:46:19.268712242Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"61845178\" in 4.279656243s" Sep 12 17:46:19.269148 containerd[1714]: time="2025-09-12T17:46:19.268754042Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\"" Sep 12 17:46:19.270808 containerd[1714]: time="2025-09-12T17:46:19.270126083Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 17:46:19.281445 containerd[1714]: time="2025-09-12T17:46:19.281392532Z" level=info msg="CreateContainer within sandbox 
\"58d382360a2f9dea4c3e2cb5687701cf51ec343d8ff800f1d42bb97cca156fb8\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 12 17:46:19.328124 containerd[1714]: time="2025-09-12T17:46:19.327316450Z" level=info msg="CreateContainer within sandbox \"58d382360a2f9dea4c3e2cb5687701cf51ec343d8ff800f1d42bb97cca156fb8\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"38446f861cfe5447a85f370ac4eaf0b70bcc324956dece3e2500e7be31d07e39\"" Sep 12 17:46:19.329559 containerd[1714]: time="2025-09-12T17:46:19.328471691Z" level=info msg="StartContainer for \"38446f861cfe5447a85f370ac4eaf0b70bcc324956dece3e2500e7be31d07e39\"" Sep 12 17:46:19.381853 systemd[1]: Started cri-containerd-38446f861cfe5447a85f370ac4eaf0b70bcc324956dece3e2500e7be31d07e39.scope - libcontainer container 38446f861cfe5447a85f370ac4eaf0b70bcc324956dece3e2500e7be31d07e39. Sep 12 17:46:19.433665 containerd[1714]: time="2025-09-12T17:46:19.433456537Z" level=info msg="StartContainer for \"38446f861cfe5447a85f370ac4eaf0b70bcc324956dece3e2500e7be31d07e39\" returns successfully" Sep 12 17:46:19.470852 systemd[1]: run-containerd-runc-k8s.io-38446f861cfe5447a85f370ac4eaf0b70bcc324956dece3e2500e7be31d07e39-runc.9Veb4K.mount: Deactivated successfully. 
Sep 12 17:46:19.586563 kubelet[3184]: I0912 17:46:19.585797 3184 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-slnk5" podStartSLOduration=28.214321333 podStartE2EDuration="34.585778663s" podCreationTimestamp="2025-09-12 17:45:45 +0000 UTC" firstStartedPulling="2025-09-12 17:46:12.898270712 +0000 UTC m=+50.753899139" lastFinishedPulling="2025-09-12 17:46:19.269728002 +0000 UTC m=+57.125356469" observedRunningTime="2025-09-12 17:46:19.585751623 +0000 UTC m=+57.441380090" watchObservedRunningTime="2025-09-12 17:46:19.585778663 +0000 UTC m=+57.441407130" Sep 12 17:46:22.182570 containerd[1714]: time="2025-09-12T17:46:22.182386600Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:46:22.189380 containerd[1714]: time="2025-09-12T17:46:22.189342326Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=44530807" Sep 12 17:46:22.189897 containerd[1714]: time="2025-09-12T17:46:22.189857046Z" level=info msg="ImageCreate event name:\"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:46:22.200940 containerd[1714]: time="2025-09-12T17:46:22.200891535Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:46:22.204247 containerd[1714]: time="2025-09-12T17:46:22.201399616Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", 
size \"45900064\" in 2.931238733s" Sep 12 17:46:22.204247 containerd[1714]: time="2025-09-12T17:46:22.201434056Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 12 17:46:22.204247 containerd[1714]: time="2025-09-12T17:46:22.203294377Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 12 17:46:22.299414 containerd[1714]: time="2025-09-12T17:46:22.299353936Z" level=info msg="StopPodSandbox for \"157d360263fb2c7af3aa7f9f8815c289dd33493d0095a436f60ec6ac662b0fa3\"" Sep 12 17:46:22.529700 containerd[1714]: 2025-09-12 17:46:22.349 [WARNING][5729] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="157d360263fb2c7af3aa7f9f8815c289dd33493d0095a436f60ec6ac662b0fa3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--ca65cd0ccc-k8s-coredns--674b8bbfcf--kbs2p-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"fcbac6f2-0b6a-4a25-b5bf-41bd3c3b4f83", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 45, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-ca65cd0ccc", ContainerID:"8b0ff27af033a301b84729dcd8d8def567049183c52fb5d65db01073bcb60a9d", Pod:"coredns-674b8bbfcf-kbs2p", Endpoint:"eth0", 
ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.21.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicaa8ae87a22", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:46:22.529700 containerd[1714]: 2025-09-12 17:46:22.468 [INFO][5729] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="157d360263fb2c7af3aa7f9f8815c289dd33493d0095a436f60ec6ac662b0fa3" Sep 12 17:46:22.529700 containerd[1714]: 2025-09-12 17:46:22.468 [INFO][5729] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="157d360263fb2c7af3aa7f9f8815c289dd33493d0095a436f60ec6ac662b0fa3" iface="eth0" netns="" Sep 12 17:46:22.529700 containerd[1714]: 2025-09-12 17:46:22.468 [INFO][5729] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="157d360263fb2c7af3aa7f9f8815c289dd33493d0095a436f60ec6ac662b0fa3" Sep 12 17:46:22.529700 containerd[1714]: 2025-09-12 17:46:22.468 [INFO][5729] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="157d360263fb2c7af3aa7f9f8815c289dd33493d0095a436f60ec6ac662b0fa3" Sep 12 17:46:22.529700 containerd[1714]: 2025-09-12 17:46:22.503 [INFO][5736] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="157d360263fb2c7af3aa7f9f8815c289dd33493d0095a436f60ec6ac662b0fa3" HandleID="k8s-pod-network.157d360263fb2c7af3aa7f9f8815c289dd33493d0095a436f60ec6ac662b0fa3" Workload="ci--4081.3.6--a--ca65cd0ccc-k8s-coredns--674b8bbfcf--kbs2p-eth0" Sep 12 17:46:22.529700 containerd[1714]: 2025-09-12 17:46:22.503 [INFO][5736] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:46:22.529700 containerd[1714]: 2025-09-12 17:46:22.503 [INFO][5736] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:46:22.529700 containerd[1714]: 2025-09-12 17:46:22.520 [WARNING][5736] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="157d360263fb2c7af3aa7f9f8815c289dd33493d0095a436f60ec6ac662b0fa3" HandleID="k8s-pod-network.157d360263fb2c7af3aa7f9f8815c289dd33493d0095a436f60ec6ac662b0fa3" Workload="ci--4081.3.6--a--ca65cd0ccc-k8s-coredns--674b8bbfcf--kbs2p-eth0" Sep 12 17:46:22.529700 containerd[1714]: 2025-09-12 17:46:22.521 [INFO][5736] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="157d360263fb2c7af3aa7f9f8815c289dd33493d0095a436f60ec6ac662b0fa3" HandleID="k8s-pod-network.157d360263fb2c7af3aa7f9f8815c289dd33493d0095a436f60ec6ac662b0fa3" Workload="ci--4081.3.6--a--ca65cd0ccc-k8s-coredns--674b8bbfcf--kbs2p-eth0" Sep 12 17:46:22.529700 containerd[1714]: 2025-09-12 17:46:22.524 [INFO][5736] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:46:22.529700 containerd[1714]: 2025-09-12 17:46:22.527 [INFO][5729] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="157d360263fb2c7af3aa7f9f8815c289dd33493d0095a436f60ec6ac662b0fa3" Sep 12 17:46:22.530482 containerd[1714]: time="2025-09-12T17:46:22.529748006Z" level=info msg="TearDown network for sandbox \"157d360263fb2c7af3aa7f9f8815c289dd33493d0095a436f60ec6ac662b0fa3\" successfully" Sep 12 17:46:22.530482 containerd[1714]: time="2025-09-12T17:46:22.529774926Z" level=info msg="StopPodSandbox for \"157d360263fb2c7af3aa7f9f8815c289dd33493d0095a436f60ec6ac662b0fa3\" returns successfully" Sep 12 17:46:22.530482 containerd[1714]: time="2025-09-12T17:46:22.530425286Z" level=info msg="RemovePodSandbox for \"157d360263fb2c7af3aa7f9f8815c289dd33493d0095a436f60ec6ac662b0fa3\"" Sep 12 17:46:22.539431 containerd[1714]: time="2025-09-12T17:46:22.539382254Z" level=info msg="Forcibly stopping sandbox \"157d360263fb2c7af3aa7f9f8815c289dd33493d0095a436f60ec6ac662b0fa3\"" Sep 12 17:46:22.578220 containerd[1714]: time="2025-09-12T17:46:22.578109806Z" level=info msg="CreateContainer within sandbox \"5c099de3b4c477613d344486fcc56b518263e741c6a5e88e11b123a2ee86ea9b\" for container 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 17:46:22.614554 containerd[1714]: 2025-09-12 17:46:22.574 [WARNING][5752] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="157d360263fb2c7af3aa7f9f8815c289dd33493d0095a436f60ec6ac662b0fa3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--ca65cd0ccc-k8s-coredns--674b8bbfcf--kbs2p-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"fcbac6f2-0b6a-4a25-b5bf-41bd3c3b4f83", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 45, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-ca65cd0ccc", ContainerID:"8b0ff27af033a301b84729dcd8d8def567049183c52fb5d65db01073bcb60a9d", Pod:"coredns-674b8bbfcf-kbs2p", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.21.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicaa8ae87a22", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:46:22.614554 containerd[1714]: 2025-09-12 17:46:22.574 [INFO][5752] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="157d360263fb2c7af3aa7f9f8815c289dd33493d0095a436f60ec6ac662b0fa3" Sep 12 17:46:22.614554 containerd[1714]: 2025-09-12 17:46:22.574 [INFO][5752] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="157d360263fb2c7af3aa7f9f8815c289dd33493d0095a436f60ec6ac662b0fa3" iface="eth0" netns="" Sep 12 17:46:22.614554 containerd[1714]: 2025-09-12 17:46:22.574 [INFO][5752] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="157d360263fb2c7af3aa7f9f8815c289dd33493d0095a436f60ec6ac662b0fa3" Sep 12 17:46:22.614554 containerd[1714]: 2025-09-12 17:46:22.574 [INFO][5752] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="157d360263fb2c7af3aa7f9f8815c289dd33493d0095a436f60ec6ac662b0fa3" Sep 12 17:46:22.614554 containerd[1714]: 2025-09-12 17:46:22.600 [INFO][5759] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="157d360263fb2c7af3aa7f9f8815c289dd33493d0095a436f60ec6ac662b0fa3" HandleID="k8s-pod-network.157d360263fb2c7af3aa7f9f8815c289dd33493d0095a436f60ec6ac662b0fa3" Workload="ci--4081.3.6--a--ca65cd0ccc-k8s-coredns--674b8bbfcf--kbs2p-eth0" Sep 12 17:46:22.614554 containerd[1714]: 2025-09-12 17:46:22.600 [INFO][5759] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:46:22.614554 containerd[1714]: 2025-09-12 17:46:22.600 [INFO][5759] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:46:22.614554 containerd[1714]: 2025-09-12 17:46:22.609 [WARNING][5759] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="157d360263fb2c7af3aa7f9f8815c289dd33493d0095a436f60ec6ac662b0fa3" HandleID="k8s-pod-network.157d360263fb2c7af3aa7f9f8815c289dd33493d0095a436f60ec6ac662b0fa3" Workload="ci--4081.3.6--a--ca65cd0ccc-k8s-coredns--674b8bbfcf--kbs2p-eth0" Sep 12 17:46:22.614554 containerd[1714]: 2025-09-12 17:46:22.609 [INFO][5759] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="157d360263fb2c7af3aa7f9f8815c289dd33493d0095a436f60ec6ac662b0fa3" HandleID="k8s-pod-network.157d360263fb2c7af3aa7f9f8815c289dd33493d0095a436f60ec6ac662b0fa3" Workload="ci--4081.3.6--a--ca65cd0ccc-k8s-coredns--674b8bbfcf--kbs2p-eth0" Sep 12 17:46:22.614554 containerd[1714]: 2025-09-12 17:46:22.611 [INFO][5759] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:46:22.614554 containerd[1714]: 2025-09-12 17:46:22.613 [INFO][5752] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="157d360263fb2c7af3aa7f9f8815c289dd33493d0095a436f60ec6ac662b0fa3" Sep 12 17:46:22.614967 containerd[1714]: time="2025-09-12T17:46:22.614614196Z" level=info msg="TearDown network for sandbox \"157d360263fb2c7af3aa7f9f8815c289dd33493d0095a436f60ec6ac662b0fa3\" successfully" Sep 12 17:46:22.689994 containerd[1714]: time="2025-09-12T17:46:22.689939418Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"157d360263fb2c7af3aa7f9f8815c289dd33493d0095a436f60ec6ac662b0fa3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 12 17:46:22.690127 containerd[1714]: time="2025-09-12T17:46:22.690037698Z" level=info msg="RemovePodSandbox \"157d360263fb2c7af3aa7f9f8815c289dd33493d0095a436f60ec6ac662b0fa3\" returns successfully" Sep 12 17:46:22.690941 containerd[1714]: time="2025-09-12T17:46:22.690674418Z" level=info msg="StopPodSandbox for \"4800a32dc0ece1e37f0196ac2e5a09864687a071daee93d8f8db184499212a46\"" Sep 12 17:46:22.717586 containerd[1714]: time="2025-09-12T17:46:22.717539600Z" level=info msg="CreateContainer within sandbox \"5c099de3b4c477613d344486fcc56b518263e741c6a5e88e11b123a2ee86ea9b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8887b789f0de2b5f0bb790b2bd61b558c8697f39293784ae9859ac8ca8dd849d\"" Sep 12 17:46:22.718848 containerd[1714]: time="2025-09-12T17:46:22.718818841Z" level=info msg="StartContainer for \"8887b789f0de2b5f0bb790b2bd61b558c8697f39293784ae9859ac8ca8dd849d\"" Sep 12 17:46:22.778743 containerd[1714]: 2025-09-12 17:46:22.734 [WARNING][5773] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4800a32dc0ece1e37f0196ac2e5a09864687a071daee93d8f8db184499212a46" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--ca65cd0ccc-k8s-calico--apiserver--868c8dbd57--5bhwh-eth0", GenerateName:"calico-apiserver-868c8dbd57-", Namespace:"calico-apiserver", SelfLink:"", UID:"faa47950-5ad9-4518-9026-0ce0997471e9", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 45, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"868c8dbd57", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-ca65cd0ccc", ContainerID:"5c099de3b4c477613d344486fcc56b518263e741c6a5e88e11b123a2ee86ea9b", Pod:"calico-apiserver-868c8dbd57-5bhwh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.21.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3a2f95f53a1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:46:22.778743 containerd[1714]: 2025-09-12 17:46:22.735 [INFO][5773] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4800a32dc0ece1e37f0196ac2e5a09864687a071daee93d8f8db184499212a46" Sep 12 17:46:22.778743 containerd[1714]: 2025-09-12 17:46:22.735 [INFO][5773] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="4800a32dc0ece1e37f0196ac2e5a09864687a071daee93d8f8db184499212a46" iface="eth0" netns="" Sep 12 17:46:22.778743 containerd[1714]: 2025-09-12 17:46:22.735 [INFO][5773] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4800a32dc0ece1e37f0196ac2e5a09864687a071daee93d8f8db184499212a46" Sep 12 17:46:22.778743 containerd[1714]: 2025-09-12 17:46:22.735 [INFO][5773] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4800a32dc0ece1e37f0196ac2e5a09864687a071daee93d8f8db184499212a46" Sep 12 17:46:22.778743 containerd[1714]: 2025-09-12 17:46:22.756 [INFO][5781] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4800a32dc0ece1e37f0196ac2e5a09864687a071daee93d8f8db184499212a46" HandleID="k8s-pod-network.4800a32dc0ece1e37f0196ac2e5a09864687a071daee93d8f8db184499212a46" Workload="ci--4081.3.6--a--ca65cd0ccc-k8s-calico--apiserver--868c8dbd57--5bhwh-eth0" Sep 12 17:46:22.778743 containerd[1714]: 2025-09-12 17:46:22.756 [INFO][5781] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:46:22.778743 containerd[1714]: 2025-09-12 17:46:22.756 [INFO][5781] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:46:22.778743 containerd[1714]: 2025-09-12 17:46:22.766 [WARNING][5781] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4800a32dc0ece1e37f0196ac2e5a09864687a071daee93d8f8db184499212a46" HandleID="k8s-pod-network.4800a32dc0ece1e37f0196ac2e5a09864687a071daee93d8f8db184499212a46" Workload="ci--4081.3.6--a--ca65cd0ccc-k8s-calico--apiserver--868c8dbd57--5bhwh-eth0" Sep 12 17:46:22.778743 containerd[1714]: 2025-09-12 17:46:22.766 [INFO][5781] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4800a32dc0ece1e37f0196ac2e5a09864687a071daee93d8f8db184499212a46" HandleID="k8s-pod-network.4800a32dc0ece1e37f0196ac2e5a09864687a071daee93d8f8db184499212a46" Workload="ci--4081.3.6--a--ca65cd0ccc-k8s-calico--apiserver--868c8dbd57--5bhwh-eth0" Sep 12 17:46:22.778743 containerd[1714]: 2025-09-12 17:46:22.769 [INFO][5781] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:46:22.778743 containerd[1714]: 2025-09-12 17:46:22.775 [INFO][5773] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4800a32dc0ece1e37f0196ac2e5a09864687a071daee93d8f8db184499212a46" Sep 12 17:46:22.779353 containerd[1714]: time="2025-09-12T17:46:22.778799411Z" level=info msg="TearDown network for sandbox \"4800a32dc0ece1e37f0196ac2e5a09864687a071daee93d8f8db184499212a46\" successfully" Sep 12 17:46:22.779353 containerd[1714]: time="2025-09-12T17:46:22.778922051Z" level=info msg="StopPodSandbox for \"4800a32dc0ece1e37f0196ac2e5a09864687a071daee93d8f8db184499212a46\" returns successfully" Sep 12 17:46:22.779982 containerd[1714]: time="2025-09-12T17:46:22.779772932Z" level=info msg="RemovePodSandbox for \"4800a32dc0ece1e37f0196ac2e5a09864687a071daee93d8f8db184499212a46\"" Sep 12 17:46:22.779982 containerd[1714]: time="2025-09-12T17:46:22.779861132Z" level=info msg="Forcibly stopping sandbox \"4800a32dc0ece1e37f0196ac2e5a09864687a071daee93d8f8db184499212a46\"" Sep 12 17:46:22.797614 systemd[1]: run-containerd-runc-k8s.io-8887b789f0de2b5f0bb790b2bd61b558c8697f39293784ae9859ac8ca8dd849d-runc.tcJqj1.mount: Deactivated successfully. 
Sep 12 17:46:22.808404 systemd[1]: Started cri-containerd-8887b789f0de2b5f0bb790b2bd61b558c8697f39293784ae9859ac8ca8dd849d.scope - libcontainer container 8887b789f0de2b5f0bb790b2bd61b558c8697f39293784ae9859ac8ca8dd849d. Sep 12 17:46:22.865285 containerd[1714]: time="2025-09-12T17:46:22.865212682Z" level=info msg="StartContainer for \"8887b789f0de2b5f0bb790b2bd61b558c8697f39293784ae9859ac8ca8dd849d\" returns successfully" Sep 12 17:46:22.893095 containerd[1714]: 2025-09-12 17:46:22.845 [WARNING][5811] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4800a32dc0ece1e37f0196ac2e5a09864687a071daee93d8f8db184499212a46" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--ca65cd0ccc-k8s-calico--apiserver--868c8dbd57--5bhwh-eth0", GenerateName:"calico-apiserver-868c8dbd57-", Namespace:"calico-apiserver", SelfLink:"", UID:"faa47950-5ad9-4518-9026-0ce0997471e9", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 45, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"868c8dbd57", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-ca65cd0ccc", ContainerID:"5c099de3b4c477613d344486fcc56b518263e741c6a5e88e11b123a2ee86ea9b", Pod:"calico-apiserver-868c8dbd57-5bhwh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.21.132/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3a2f95f53a1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:46:22.893095 containerd[1714]: 2025-09-12 17:46:22.846 [INFO][5811] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4800a32dc0ece1e37f0196ac2e5a09864687a071daee93d8f8db184499212a46" Sep 12 17:46:22.893095 containerd[1714]: 2025-09-12 17:46:22.846 [INFO][5811] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4800a32dc0ece1e37f0196ac2e5a09864687a071daee93d8f8db184499212a46" iface="eth0" netns="" Sep 12 17:46:22.893095 containerd[1714]: 2025-09-12 17:46:22.846 [INFO][5811] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4800a32dc0ece1e37f0196ac2e5a09864687a071daee93d8f8db184499212a46" Sep 12 17:46:22.893095 containerd[1714]: 2025-09-12 17:46:22.846 [INFO][5811] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4800a32dc0ece1e37f0196ac2e5a09864687a071daee93d8f8db184499212a46" Sep 12 17:46:22.893095 containerd[1714]: 2025-09-12 17:46:22.875 [INFO][5826] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4800a32dc0ece1e37f0196ac2e5a09864687a071daee93d8f8db184499212a46" HandleID="k8s-pod-network.4800a32dc0ece1e37f0196ac2e5a09864687a071daee93d8f8db184499212a46" Workload="ci--4081.3.6--a--ca65cd0ccc-k8s-calico--apiserver--868c8dbd57--5bhwh-eth0" Sep 12 17:46:22.893095 containerd[1714]: 2025-09-12 17:46:22.875 [INFO][5826] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:46:22.893095 containerd[1714]: 2025-09-12 17:46:22.875 [INFO][5826] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:46:22.893095 containerd[1714]: 2025-09-12 17:46:22.886 [WARNING][5826] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4800a32dc0ece1e37f0196ac2e5a09864687a071daee93d8f8db184499212a46" HandleID="k8s-pod-network.4800a32dc0ece1e37f0196ac2e5a09864687a071daee93d8f8db184499212a46" Workload="ci--4081.3.6--a--ca65cd0ccc-k8s-calico--apiserver--868c8dbd57--5bhwh-eth0" Sep 12 17:46:22.893095 containerd[1714]: 2025-09-12 17:46:22.886 [INFO][5826] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4800a32dc0ece1e37f0196ac2e5a09864687a071daee93d8f8db184499212a46" HandleID="k8s-pod-network.4800a32dc0ece1e37f0196ac2e5a09864687a071daee93d8f8db184499212a46" Workload="ci--4081.3.6--a--ca65cd0ccc-k8s-calico--apiserver--868c8dbd57--5bhwh-eth0" Sep 12 17:46:22.893095 containerd[1714]: 2025-09-12 17:46:22.888 [INFO][5826] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:46:22.893095 containerd[1714]: 2025-09-12 17:46:22.891 [INFO][5811] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4800a32dc0ece1e37f0196ac2e5a09864687a071daee93d8f8db184499212a46" Sep 12 17:46:22.896428 containerd[1714]: time="2025-09-12T17:46:22.893886266Z" level=info msg="TearDown network for sandbox \"4800a32dc0ece1e37f0196ac2e5a09864687a071daee93d8f8db184499212a46\" successfully" Sep 12 17:46:22.913150 containerd[1714]: time="2025-09-12T17:46:22.913099441Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4800a32dc0ece1e37f0196ac2e5a09864687a071daee93d8f8db184499212a46\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 12 17:46:22.913285 containerd[1714]: time="2025-09-12T17:46:22.913171761Z" level=info msg="RemovePodSandbox \"4800a32dc0ece1e37f0196ac2e5a09864687a071daee93d8f8db184499212a46\" returns successfully" Sep 12 17:46:22.913696 containerd[1714]: time="2025-09-12T17:46:22.913668082Z" level=info msg="StopPodSandbox for \"e7574ca7035713c5e387975709e4546cfa67b972fafaca77538d91d7bf3d6afa\"" Sep 12 17:46:23.019597 containerd[1714]: 2025-09-12 17:46:22.967 [WARNING][5851] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e7574ca7035713c5e387975709e4546cfa67b972fafaca77538d91d7bf3d6afa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--ca65cd0ccc-k8s-coredns--674b8bbfcf--x6pz9-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"acb602f3-ad8a-4d55-aba0-1c4c66a93bb2", ResourceVersion:"1032", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 45, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-ca65cd0ccc", ContainerID:"fea34b80046dc8fbf477fb05fd660bd212c5d87cb53dd9c2cc2615a7bafce9b2", Pod:"coredns-674b8bbfcf-x6pz9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.21.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali02120edfea8", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:46:23.019597 containerd[1714]: 2025-09-12 17:46:22.968 [INFO][5851] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e7574ca7035713c5e387975709e4546cfa67b972fafaca77538d91d7bf3d6afa" Sep 12 17:46:23.019597 containerd[1714]: 2025-09-12 17:46:22.968 [INFO][5851] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e7574ca7035713c5e387975709e4546cfa67b972fafaca77538d91d7bf3d6afa" iface="eth0" netns="" Sep 12 17:46:23.019597 containerd[1714]: 2025-09-12 17:46:22.968 [INFO][5851] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e7574ca7035713c5e387975709e4546cfa67b972fafaca77538d91d7bf3d6afa" Sep 12 17:46:23.019597 containerd[1714]: 2025-09-12 17:46:22.968 [INFO][5851] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e7574ca7035713c5e387975709e4546cfa67b972fafaca77538d91d7bf3d6afa" Sep 12 17:46:23.019597 containerd[1714]: 2025-09-12 17:46:22.992 [INFO][5860] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e7574ca7035713c5e387975709e4546cfa67b972fafaca77538d91d7bf3d6afa" HandleID="k8s-pod-network.e7574ca7035713c5e387975709e4546cfa67b972fafaca77538d91d7bf3d6afa" Workload="ci--4081.3.6--a--ca65cd0ccc-k8s-coredns--674b8bbfcf--x6pz9-eth0" Sep 12 17:46:23.019597 containerd[1714]: 2025-09-12 17:46:22.992 [INFO][5860] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 12 17:46:23.019597 containerd[1714]: 2025-09-12 17:46:22.992 [INFO][5860] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:46:23.019597 containerd[1714]: 2025-09-12 17:46:23.004 [WARNING][5860] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e7574ca7035713c5e387975709e4546cfa67b972fafaca77538d91d7bf3d6afa" HandleID="k8s-pod-network.e7574ca7035713c5e387975709e4546cfa67b972fafaca77538d91d7bf3d6afa" Workload="ci--4081.3.6--a--ca65cd0ccc-k8s-coredns--674b8bbfcf--x6pz9-eth0" Sep 12 17:46:23.019597 containerd[1714]: 2025-09-12 17:46:23.004 [INFO][5860] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e7574ca7035713c5e387975709e4546cfa67b972fafaca77538d91d7bf3d6afa" HandleID="k8s-pod-network.e7574ca7035713c5e387975709e4546cfa67b972fafaca77538d91d7bf3d6afa" Workload="ci--4081.3.6--a--ca65cd0ccc-k8s-coredns--674b8bbfcf--x6pz9-eth0" Sep 12 17:46:23.019597 containerd[1714]: 2025-09-12 17:46:23.010 [INFO][5860] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:46:23.019597 containerd[1714]: 2025-09-12 17:46:23.015 [INFO][5851] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="e7574ca7035713c5e387975709e4546cfa67b972fafaca77538d91d7bf3d6afa" Sep 12 17:46:23.019597 containerd[1714]: time="2025-09-12T17:46:23.019453169Z" level=info msg="TearDown network for sandbox \"e7574ca7035713c5e387975709e4546cfa67b972fafaca77538d91d7bf3d6afa\" successfully" Sep 12 17:46:23.019597 containerd[1714]: time="2025-09-12T17:46:23.019480009Z" level=info msg="StopPodSandbox for \"e7574ca7035713c5e387975709e4546cfa67b972fafaca77538d91d7bf3d6afa\" returns successfully" Sep 12 17:46:23.024185 containerd[1714]: time="2025-09-12T17:46:23.022744572Z" level=info msg="RemovePodSandbox for \"e7574ca7035713c5e387975709e4546cfa67b972fafaca77538d91d7bf3d6afa\"" Sep 12 17:46:23.024476 containerd[1714]: time="2025-09-12T17:46:23.024445173Z" level=info msg="Forcibly stopping sandbox \"e7574ca7035713c5e387975709e4546cfa67b972fafaca77538d91d7bf3d6afa\"" Sep 12 17:46:23.113528 containerd[1714]: 2025-09-12 17:46:23.074 [WARNING][5874] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e7574ca7035713c5e387975709e4546cfa67b972fafaca77538d91d7bf3d6afa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--ca65cd0ccc-k8s-coredns--674b8bbfcf--x6pz9-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"acb602f3-ad8a-4d55-aba0-1c4c66a93bb2", ResourceVersion:"1032", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 45, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-ca65cd0ccc", ContainerID:"fea34b80046dc8fbf477fb05fd660bd212c5d87cb53dd9c2cc2615a7bafce9b2", Pod:"coredns-674b8bbfcf-x6pz9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.21.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali02120edfea8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:46:23.113528 containerd[1714]: 
2025-09-12 17:46:23.074 [INFO][5874] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e7574ca7035713c5e387975709e4546cfa67b972fafaca77538d91d7bf3d6afa" Sep 12 17:46:23.113528 containerd[1714]: 2025-09-12 17:46:23.074 [INFO][5874] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e7574ca7035713c5e387975709e4546cfa67b972fafaca77538d91d7bf3d6afa" iface="eth0" netns="" Sep 12 17:46:23.113528 containerd[1714]: 2025-09-12 17:46:23.074 [INFO][5874] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e7574ca7035713c5e387975709e4546cfa67b972fafaca77538d91d7bf3d6afa" Sep 12 17:46:23.113528 containerd[1714]: 2025-09-12 17:46:23.074 [INFO][5874] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e7574ca7035713c5e387975709e4546cfa67b972fafaca77538d91d7bf3d6afa" Sep 12 17:46:23.113528 containerd[1714]: 2025-09-12 17:46:23.096 [INFO][5882] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e7574ca7035713c5e387975709e4546cfa67b972fafaca77538d91d7bf3d6afa" HandleID="k8s-pod-network.e7574ca7035713c5e387975709e4546cfa67b972fafaca77538d91d7bf3d6afa" Workload="ci--4081.3.6--a--ca65cd0ccc-k8s-coredns--674b8bbfcf--x6pz9-eth0" Sep 12 17:46:23.113528 containerd[1714]: 2025-09-12 17:46:23.096 [INFO][5882] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:46:23.113528 containerd[1714]: 2025-09-12 17:46:23.096 [INFO][5882] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:46:23.113528 containerd[1714]: 2025-09-12 17:46:23.107 [WARNING][5882] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e7574ca7035713c5e387975709e4546cfa67b972fafaca77538d91d7bf3d6afa" HandleID="k8s-pod-network.e7574ca7035713c5e387975709e4546cfa67b972fafaca77538d91d7bf3d6afa" Workload="ci--4081.3.6--a--ca65cd0ccc-k8s-coredns--674b8bbfcf--x6pz9-eth0" Sep 12 17:46:23.113528 containerd[1714]: 2025-09-12 17:46:23.107 [INFO][5882] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e7574ca7035713c5e387975709e4546cfa67b972fafaca77538d91d7bf3d6afa" HandleID="k8s-pod-network.e7574ca7035713c5e387975709e4546cfa67b972fafaca77538d91d7bf3d6afa" Workload="ci--4081.3.6--a--ca65cd0ccc-k8s-coredns--674b8bbfcf--x6pz9-eth0" Sep 12 17:46:23.113528 containerd[1714]: 2025-09-12 17:46:23.108 [INFO][5882] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:46:23.113528 containerd[1714]: 2025-09-12 17:46:23.111 [INFO][5874] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e7574ca7035713c5e387975709e4546cfa67b972fafaca77538d91d7bf3d6afa" Sep 12 17:46:23.114607 containerd[1714]: time="2025-09-12T17:46:23.114449767Z" level=info msg="TearDown network for sandbox \"e7574ca7035713c5e387975709e4546cfa67b972fafaca77538d91d7bf3d6afa\" successfully" Sep 12 17:46:23.123409 containerd[1714]: time="2025-09-12T17:46:23.123363054Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e7574ca7035713c5e387975709e4546cfa67b972fafaca77538d91d7bf3d6afa\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 12 17:46:23.123667 containerd[1714]: time="2025-09-12T17:46:23.123569935Z" level=info msg="RemovePodSandbox \"e7574ca7035713c5e387975709e4546cfa67b972fafaca77538d91d7bf3d6afa\" returns successfully" Sep 12 17:46:23.124092 containerd[1714]: time="2025-09-12T17:46:23.124055815Z" level=info msg="StopPodSandbox for \"a61e7c042aef263e702f8b25fca4f7bb7a7092206684b86b7609f3484710cf90\"" Sep 12 17:46:23.199203 containerd[1714]: 2025-09-12 17:46:23.162 [WARNING][5896] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a61e7c042aef263e702f8b25fca4f7bb7a7092206684b86b7609f3484710cf90" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--ca65cd0ccc-k8s-goldmane--54d579b49d--slnk5-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"e27e3cc8-0ec3-48e6-8bfe-e729aef4366f", ResourceVersion:"1048", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 45, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-ca65cd0ccc", ContainerID:"58d382360a2f9dea4c3e2cb5687701cf51ec343d8ff800f1d42bb97cca156fb8", Pod:"goldmane-54d579b49d-slnk5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.21.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"cali8eb75dc1396", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:46:23.199203 containerd[1714]: 2025-09-12 17:46:23.162 [INFO][5896] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a61e7c042aef263e702f8b25fca4f7bb7a7092206684b86b7609f3484710cf90" Sep 12 17:46:23.199203 containerd[1714]: 2025-09-12 17:46:23.162 [INFO][5896] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a61e7c042aef263e702f8b25fca4f7bb7a7092206684b86b7609f3484710cf90" iface="eth0" netns="" Sep 12 17:46:23.199203 containerd[1714]: 2025-09-12 17:46:23.162 [INFO][5896] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a61e7c042aef263e702f8b25fca4f7bb7a7092206684b86b7609f3484710cf90" Sep 12 17:46:23.199203 containerd[1714]: 2025-09-12 17:46:23.162 [INFO][5896] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a61e7c042aef263e702f8b25fca4f7bb7a7092206684b86b7609f3484710cf90" Sep 12 17:46:23.199203 containerd[1714]: 2025-09-12 17:46:23.185 [INFO][5903] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a61e7c042aef263e702f8b25fca4f7bb7a7092206684b86b7609f3484710cf90" HandleID="k8s-pod-network.a61e7c042aef263e702f8b25fca4f7bb7a7092206684b86b7609f3484710cf90" Workload="ci--4081.3.6--a--ca65cd0ccc-k8s-goldmane--54d579b49d--slnk5-eth0" Sep 12 17:46:23.199203 containerd[1714]: 2025-09-12 17:46:23.185 [INFO][5903] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:46:23.199203 containerd[1714]: 2025-09-12 17:46:23.185 [INFO][5903] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:46:23.199203 containerd[1714]: 2025-09-12 17:46:23.194 [WARNING][5903] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a61e7c042aef263e702f8b25fca4f7bb7a7092206684b86b7609f3484710cf90" HandleID="k8s-pod-network.a61e7c042aef263e702f8b25fca4f7bb7a7092206684b86b7609f3484710cf90" Workload="ci--4081.3.6--a--ca65cd0ccc-k8s-goldmane--54d579b49d--slnk5-eth0" Sep 12 17:46:23.199203 containerd[1714]: 2025-09-12 17:46:23.194 [INFO][5903] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a61e7c042aef263e702f8b25fca4f7bb7a7092206684b86b7609f3484710cf90" HandleID="k8s-pod-network.a61e7c042aef263e702f8b25fca4f7bb7a7092206684b86b7609f3484710cf90" Workload="ci--4081.3.6--a--ca65cd0ccc-k8s-goldmane--54d579b49d--slnk5-eth0" Sep 12 17:46:23.199203 containerd[1714]: 2025-09-12 17:46:23.196 [INFO][5903] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:46:23.199203 containerd[1714]: 2025-09-12 17:46:23.197 [INFO][5896] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a61e7c042aef263e702f8b25fca4f7bb7a7092206684b86b7609f3484710cf90" Sep 12 17:46:23.200136 containerd[1714]: time="2025-09-12T17:46:23.199338117Z" level=info msg="TearDown network for sandbox \"a61e7c042aef263e702f8b25fca4f7bb7a7092206684b86b7609f3484710cf90\" successfully" Sep 12 17:46:23.200136 containerd[1714]: time="2025-09-12T17:46:23.199372677Z" level=info msg="StopPodSandbox for \"a61e7c042aef263e702f8b25fca4f7bb7a7092206684b86b7609f3484710cf90\" returns successfully" Sep 12 17:46:23.201300 containerd[1714]: time="2025-09-12T17:46:23.200393158Z" level=info msg="RemovePodSandbox for \"a61e7c042aef263e702f8b25fca4f7bb7a7092206684b86b7609f3484710cf90\"" Sep 12 17:46:23.201300 containerd[1714]: time="2025-09-12T17:46:23.200441638Z" level=info msg="Forcibly stopping sandbox \"a61e7c042aef263e702f8b25fca4f7bb7a7092206684b86b7609f3484710cf90\"" Sep 12 17:46:23.274411 containerd[1714]: 2025-09-12 17:46:23.235 [WARNING][5917] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a61e7c042aef263e702f8b25fca4f7bb7a7092206684b86b7609f3484710cf90" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--ca65cd0ccc-k8s-goldmane--54d579b49d--slnk5-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"e27e3cc8-0ec3-48e6-8bfe-e729aef4366f", ResourceVersion:"1048", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 45, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-ca65cd0ccc", ContainerID:"58d382360a2f9dea4c3e2cb5687701cf51ec343d8ff800f1d42bb97cca156fb8", Pod:"goldmane-54d579b49d-slnk5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.21.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8eb75dc1396", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:46:23.274411 containerd[1714]: 2025-09-12 17:46:23.237 [INFO][5917] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a61e7c042aef263e702f8b25fca4f7bb7a7092206684b86b7609f3484710cf90" Sep 12 17:46:23.274411 containerd[1714]: 2025-09-12 17:46:23.237 [INFO][5917] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a61e7c042aef263e702f8b25fca4f7bb7a7092206684b86b7609f3484710cf90" iface="eth0" netns="" Sep 12 17:46:23.274411 containerd[1714]: 2025-09-12 17:46:23.237 [INFO][5917] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a61e7c042aef263e702f8b25fca4f7bb7a7092206684b86b7609f3484710cf90" Sep 12 17:46:23.274411 containerd[1714]: 2025-09-12 17:46:23.237 [INFO][5917] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a61e7c042aef263e702f8b25fca4f7bb7a7092206684b86b7609f3484710cf90" Sep 12 17:46:23.274411 containerd[1714]: 2025-09-12 17:46:23.259 [INFO][5924] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a61e7c042aef263e702f8b25fca4f7bb7a7092206684b86b7609f3484710cf90" HandleID="k8s-pod-network.a61e7c042aef263e702f8b25fca4f7bb7a7092206684b86b7609f3484710cf90" Workload="ci--4081.3.6--a--ca65cd0ccc-k8s-goldmane--54d579b49d--slnk5-eth0" Sep 12 17:46:23.274411 containerd[1714]: 2025-09-12 17:46:23.259 [INFO][5924] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:46:23.274411 containerd[1714]: 2025-09-12 17:46:23.259 [INFO][5924] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:46:23.274411 containerd[1714]: 2025-09-12 17:46:23.269 [WARNING][5924] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a61e7c042aef263e702f8b25fca4f7bb7a7092206684b86b7609f3484710cf90" HandleID="k8s-pod-network.a61e7c042aef263e702f8b25fca4f7bb7a7092206684b86b7609f3484710cf90" Workload="ci--4081.3.6--a--ca65cd0ccc-k8s-goldmane--54d579b49d--slnk5-eth0" Sep 12 17:46:23.274411 containerd[1714]: 2025-09-12 17:46:23.269 [INFO][5924] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a61e7c042aef263e702f8b25fca4f7bb7a7092206684b86b7609f3484710cf90" HandleID="k8s-pod-network.a61e7c042aef263e702f8b25fca4f7bb7a7092206684b86b7609f3484710cf90" Workload="ci--4081.3.6--a--ca65cd0ccc-k8s-goldmane--54d579b49d--slnk5-eth0" Sep 12 17:46:23.274411 containerd[1714]: 2025-09-12 17:46:23.271 [INFO][5924] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:46:23.274411 containerd[1714]: 2025-09-12 17:46:23.272 [INFO][5917] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a61e7c042aef263e702f8b25fca4f7bb7a7092206684b86b7609f3484710cf90" Sep 12 17:46:23.274411 containerd[1714]: time="2025-09-12T17:46:23.274282259Z" level=info msg="TearDown network for sandbox \"a61e7c042aef263e702f8b25fca4f7bb7a7092206684b86b7609f3484710cf90\" successfully" Sep 12 17:46:23.284144 containerd[1714]: time="2025-09-12T17:46:23.283888027Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a61e7c042aef263e702f8b25fca4f7bb7a7092206684b86b7609f3484710cf90\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 12 17:46:23.284144 containerd[1714]: time="2025-09-12T17:46:23.283967787Z" level=info msg="RemovePodSandbox \"a61e7c042aef263e702f8b25fca4f7bb7a7092206684b86b7609f3484710cf90\" returns successfully" Sep 12 17:46:23.284928 containerd[1714]: time="2025-09-12T17:46:23.284632547Z" level=info msg="StopPodSandbox for \"ba4f512a2fc1ac29afedd0f5e54eb79f9951c31c8fb924d51640398aff809e55\"" Sep 12 17:46:23.374600 containerd[1714]: 2025-09-12 17:46:23.342 [WARNING][5939] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ba4f512a2fc1ac29afedd0f5e54eb79f9951c31c8fb924d51640398aff809e55" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--ca65cd0ccc-k8s-calico--kube--controllers--84df6476b7--wbsmk-eth0", GenerateName:"calico-kube-controllers-84df6476b7-", Namespace:"calico-system", SelfLink:"", UID:"cb0990fd-3735-45b3-86b2-04ad783e7243", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 45, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84df6476b7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-ca65cd0ccc", ContainerID:"059cb8cae07a39ed2f0e74e8f0aaf856c4b539ee0049600e2cabd1433685421d", Pod:"calico-kube-controllers-84df6476b7-wbsmk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.21.135/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali231bd9f4ebb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:46:23.374600 containerd[1714]: 2025-09-12 17:46:23.342 [INFO][5939] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ba4f512a2fc1ac29afedd0f5e54eb79f9951c31c8fb924d51640398aff809e55" Sep 12 17:46:23.374600 containerd[1714]: 2025-09-12 17:46:23.342 [INFO][5939] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ba4f512a2fc1ac29afedd0f5e54eb79f9951c31c8fb924d51640398aff809e55" iface="eth0" netns="" Sep 12 17:46:23.374600 containerd[1714]: 2025-09-12 17:46:23.342 [INFO][5939] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ba4f512a2fc1ac29afedd0f5e54eb79f9951c31c8fb924d51640398aff809e55" Sep 12 17:46:23.374600 containerd[1714]: 2025-09-12 17:46:23.342 [INFO][5939] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ba4f512a2fc1ac29afedd0f5e54eb79f9951c31c8fb924d51640398aff809e55" Sep 12 17:46:23.374600 containerd[1714]: 2025-09-12 17:46:23.361 [INFO][5946] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ba4f512a2fc1ac29afedd0f5e54eb79f9951c31c8fb924d51640398aff809e55" HandleID="k8s-pod-network.ba4f512a2fc1ac29afedd0f5e54eb79f9951c31c8fb924d51640398aff809e55" Workload="ci--4081.3.6--a--ca65cd0ccc-k8s-calico--kube--controllers--84df6476b7--wbsmk-eth0" Sep 12 17:46:23.374600 containerd[1714]: 2025-09-12 17:46:23.361 [INFO][5946] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:46:23.374600 containerd[1714]: 2025-09-12 17:46:23.361 [INFO][5946] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:46:23.374600 containerd[1714]: 2025-09-12 17:46:23.370 [WARNING][5946] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ba4f512a2fc1ac29afedd0f5e54eb79f9951c31c8fb924d51640398aff809e55" HandleID="k8s-pod-network.ba4f512a2fc1ac29afedd0f5e54eb79f9951c31c8fb924d51640398aff809e55" Workload="ci--4081.3.6--a--ca65cd0ccc-k8s-calico--kube--controllers--84df6476b7--wbsmk-eth0" Sep 12 17:46:23.374600 containerd[1714]: 2025-09-12 17:46:23.370 [INFO][5946] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ba4f512a2fc1ac29afedd0f5e54eb79f9951c31c8fb924d51640398aff809e55" HandleID="k8s-pod-network.ba4f512a2fc1ac29afedd0f5e54eb79f9951c31c8fb924d51640398aff809e55" Workload="ci--4081.3.6--a--ca65cd0ccc-k8s-calico--kube--controllers--84df6476b7--wbsmk-eth0" Sep 12 17:46:23.374600 containerd[1714]: 2025-09-12 17:46:23.371 [INFO][5946] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:46:23.374600 containerd[1714]: 2025-09-12 17:46:23.373 [INFO][5939] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="ba4f512a2fc1ac29afedd0f5e54eb79f9951c31c8fb924d51640398aff809e55" Sep 12 17:46:23.375773 containerd[1714]: time="2025-09-12T17:46:23.375112102Z" level=info msg="TearDown network for sandbox \"ba4f512a2fc1ac29afedd0f5e54eb79f9951c31c8fb924d51640398aff809e55\" successfully" Sep 12 17:46:23.375773 containerd[1714]: time="2025-09-12T17:46:23.375243222Z" level=info msg="StopPodSandbox for \"ba4f512a2fc1ac29afedd0f5e54eb79f9951c31c8fb924d51640398aff809e55\" returns successfully" Sep 12 17:46:23.378223 containerd[1714]: time="2025-09-12T17:46:23.377763104Z" level=info msg="RemovePodSandbox for \"ba4f512a2fc1ac29afedd0f5e54eb79f9951c31c8fb924d51640398aff809e55\"" Sep 12 17:46:23.378223 containerd[1714]: time="2025-09-12T17:46:23.377797504Z" level=info msg="Forcibly stopping sandbox \"ba4f512a2fc1ac29afedd0f5e54eb79f9951c31c8fb924d51640398aff809e55\"" Sep 12 17:46:23.465932 containerd[1714]: 2025-09-12 17:46:23.417 [WARNING][5960] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ba4f512a2fc1ac29afedd0f5e54eb79f9951c31c8fb924d51640398aff809e55" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--ca65cd0ccc-k8s-calico--kube--controllers--84df6476b7--wbsmk-eth0", GenerateName:"calico-kube-controllers-84df6476b7-", Namespace:"calico-system", SelfLink:"", UID:"cb0990fd-3735-45b3-86b2-04ad783e7243", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 45, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84df6476b7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-ca65cd0ccc", ContainerID:"059cb8cae07a39ed2f0e74e8f0aaf856c4b539ee0049600e2cabd1433685421d", Pod:"calico-kube-controllers-84df6476b7-wbsmk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.21.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali231bd9f4ebb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:46:23.465932 containerd[1714]: 2025-09-12 17:46:23.417 [INFO][5960] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ba4f512a2fc1ac29afedd0f5e54eb79f9951c31c8fb924d51640398aff809e55" Sep 12 17:46:23.465932 containerd[1714]: 2025-09-12 17:46:23.417 [INFO][5960] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ba4f512a2fc1ac29afedd0f5e54eb79f9951c31c8fb924d51640398aff809e55" iface="eth0" netns="" Sep 12 17:46:23.465932 containerd[1714]: 2025-09-12 17:46:23.417 [INFO][5960] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ba4f512a2fc1ac29afedd0f5e54eb79f9951c31c8fb924d51640398aff809e55" Sep 12 17:46:23.465932 containerd[1714]: 2025-09-12 17:46:23.417 [INFO][5960] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ba4f512a2fc1ac29afedd0f5e54eb79f9951c31c8fb924d51640398aff809e55" Sep 12 17:46:23.465932 containerd[1714]: 2025-09-12 17:46:23.450 [INFO][5967] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ba4f512a2fc1ac29afedd0f5e54eb79f9951c31c8fb924d51640398aff809e55" HandleID="k8s-pod-network.ba4f512a2fc1ac29afedd0f5e54eb79f9951c31c8fb924d51640398aff809e55" Workload="ci--4081.3.6--a--ca65cd0ccc-k8s-calico--kube--controllers--84df6476b7--wbsmk-eth0" Sep 12 17:46:23.465932 containerd[1714]: 2025-09-12 17:46:23.450 [INFO][5967] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:46:23.465932 containerd[1714]: 2025-09-12 17:46:23.450 [INFO][5967] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:46:23.465932 containerd[1714]: 2025-09-12 17:46:23.458 [WARNING][5967] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ba4f512a2fc1ac29afedd0f5e54eb79f9951c31c8fb924d51640398aff809e55" HandleID="k8s-pod-network.ba4f512a2fc1ac29afedd0f5e54eb79f9951c31c8fb924d51640398aff809e55" Workload="ci--4081.3.6--a--ca65cd0ccc-k8s-calico--kube--controllers--84df6476b7--wbsmk-eth0" Sep 12 17:46:23.465932 containerd[1714]: 2025-09-12 17:46:23.458 [INFO][5967] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ba4f512a2fc1ac29afedd0f5e54eb79f9951c31c8fb924d51640398aff809e55" HandleID="k8s-pod-network.ba4f512a2fc1ac29afedd0f5e54eb79f9951c31c8fb924d51640398aff809e55" Workload="ci--4081.3.6--a--ca65cd0ccc-k8s-calico--kube--controllers--84df6476b7--wbsmk-eth0" Sep 12 17:46:23.465932 containerd[1714]: 2025-09-12 17:46:23.460 [INFO][5967] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:46:23.465932 containerd[1714]: 2025-09-12 17:46:23.462 [INFO][5960] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ba4f512a2fc1ac29afedd0f5e54eb79f9951c31c8fb924d51640398aff809e55" Sep 12 17:46:23.465932 containerd[1714]: time="2025-09-12T17:46:23.465742656Z" level=info msg="TearDown network for sandbox \"ba4f512a2fc1ac29afedd0f5e54eb79f9951c31c8fb924d51640398aff809e55\" successfully" Sep 12 17:46:24.435370 containerd[1714]: time="2025-09-12T17:46:24.435312014Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ba4f512a2fc1ac29afedd0f5e54eb79f9951c31c8fb924d51640398aff809e55\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 12 17:46:24.435810 containerd[1714]: time="2025-09-12T17:46:24.435395494Z" level=info msg="RemovePodSandbox \"ba4f512a2fc1ac29afedd0f5e54eb79f9951c31c8fb924d51640398aff809e55\" returns successfully" Sep 12 17:46:24.440209 containerd[1714]: time="2025-09-12T17:46:24.439855778Z" level=info msg="StopPodSandbox for \"4f641c9c531fe7560f6396fa91ec0d1ce4d9dad8ccc213245989b0aa5cba8ab2\"" Sep 12 17:46:24.541423 containerd[1714]: 2025-09-12 17:46:24.494 [WARNING][5985] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="4f641c9c531fe7560f6396fa91ec0d1ce4d9dad8ccc213245989b0aa5cba8ab2" WorkloadEndpoint="ci--4081.3.6--a--ca65cd0ccc-k8s-whisker--64cb98bb48--57t2s-eth0" Sep 12 17:46:24.541423 containerd[1714]: 2025-09-12 17:46:24.495 [INFO][5985] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4f641c9c531fe7560f6396fa91ec0d1ce4d9dad8ccc213245989b0aa5cba8ab2" Sep 12 17:46:24.541423 containerd[1714]: 2025-09-12 17:46:24.495 [INFO][5985] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="4f641c9c531fe7560f6396fa91ec0d1ce4d9dad8ccc213245989b0aa5cba8ab2" iface="eth0" netns="" Sep 12 17:46:24.541423 containerd[1714]: 2025-09-12 17:46:24.495 [INFO][5985] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4f641c9c531fe7560f6396fa91ec0d1ce4d9dad8ccc213245989b0aa5cba8ab2" Sep 12 17:46:24.541423 containerd[1714]: 2025-09-12 17:46:24.495 [INFO][5985] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4f641c9c531fe7560f6396fa91ec0d1ce4d9dad8ccc213245989b0aa5cba8ab2" Sep 12 17:46:24.541423 containerd[1714]: 2025-09-12 17:46:24.516 [INFO][5993] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4f641c9c531fe7560f6396fa91ec0d1ce4d9dad8ccc213245989b0aa5cba8ab2" HandleID="k8s-pod-network.4f641c9c531fe7560f6396fa91ec0d1ce4d9dad8ccc213245989b0aa5cba8ab2" Workload="ci--4081.3.6--a--ca65cd0ccc-k8s-whisker--64cb98bb48--57t2s-eth0" Sep 12 17:46:24.541423 containerd[1714]: 2025-09-12 17:46:24.516 [INFO][5993] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:46:24.541423 containerd[1714]: 2025-09-12 17:46:24.516 [INFO][5993] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:46:24.541423 containerd[1714]: 2025-09-12 17:46:24.534 [WARNING][5993] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4f641c9c531fe7560f6396fa91ec0d1ce4d9dad8ccc213245989b0aa5cba8ab2" HandleID="k8s-pod-network.4f641c9c531fe7560f6396fa91ec0d1ce4d9dad8ccc213245989b0aa5cba8ab2" Workload="ci--4081.3.6--a--ca65cd0ccc-k8s-whisker--64cb98bb48--57t2s-eth0" Sep 12 17:46:24.541423 containerd[1714]: 2025-09-12 17:46:24.535 [INFO][5993] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4f641c9c531fe7560f6396fa91ec0d1ce4d9dad8ccc213245989b0aa5cba8ab2" HandleID="k8s-pod-network.4f641c9c531fe7560f6396fa91ec0d1ce4d9dad8ccc213245989b0aa5cba8ab2" Workload="ci--4081.3.6--a--ca65cd0ccc-k8s-whisker--64cb98bb48--57t2s-eth0" Sep 12 17:46:24.541423 containerd[1714]: 2025-09-12 17:46:24.536 [INFO][5993] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:46:24.541423 containerd[1714]: 2025-09-12 17:46:24.539 [INFO][5985] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4f641c9c531fe7560f6396fa91ec0d1ce4d9dad8ccc213245989b0aa5cba8ab2" Sep 12 17:46:24.542819 containerd[1714]: time="2025-09-12T17:46:24.541873182Z" level=info msg="TearDown network for sandbox \"4f641c9c531fe7560f6396fa91ec0d1ce4d9dad8ccc213245989b0aa5cba8ab2\" successfully" Sep 12 17:46:24.542819 containerd[1714]: time="2025-09-12T17:46:24.542697063Z" level=info msg="StopPodSandbox for \"4f641c9c531fe7560f6396fa91ec0d1ce4d9dad8ccc213245989b0aa5cba8ab2\" returns successfully" Sep 12 17:46:24.544347 containerd[1714]: time="2025-09-12T17:46:24.543378663Z" level=info msg="RemovePodSandbox for \"4f641c9c531fe7560f6396fa91ec0d1ce4d9dad8ccc213245989b0aa5cba8ab2\"" Sep 12 17:46:24.544347 containerd[1714]: time="2025-09-12T17:46:24.543475583Z" level=info msg="Forcibly stopping sandbox \"4f641c9c531fe7560f6396fa91ec0d1ce4d9dad8ccc213245989b0aa5cba8ab2\"" Sep 12 17:46:24.589274 kubelet[3184]: I0912 17:46:24.588591 3184 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:46:24.633271 containerd[1714]: 2025-09-12 17:46:24.590 [WARNING][6008] 
cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="4f641c9c531fe7560f6396fa91ec0d1ce4d9dad8ccc213245989b0aa5cba8ab2" WorkloadEndpoint="ci--4081.3.6--a--ca65cd0ccc-k8s-whisker--64cb98bb48--57t2s-eth0" Sep 12 17:46:24.633271 containerd[1714]: 2025-09-12 17:46:24.590 [INFO][6008] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4f641c9c531fe7560f6396fa91ec0d1ce4d9dad8ccc213245989b0aa5cba8ab2" Sep 12 17:46:24.633271 containerd[1714]: 2025-09-12 17:46:24.590 [INFO][6008] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4f641c9c531fe7560f6396fa91ec0d1ce4d9dad8ccc213245989b0aa5cba8ab2" iface="eth0" netns="" Sep 12 17:46:24.633271 containerd[1714]: 2025-09-12 17:46:24.590 [INFO][6008] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4f641c9c531fe7560f6396fa91ec0d1ce4d9dad8ccc213245989b0aa5cba8ab2" Sep 12 17:46:24.633271 containerd[1714]: 2025-09-12 17:46:24.590 [INFO][6008] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4f641c9c531fe7560f6396fa91ec0d1ce4d9dad8ccc213245989b0aa5cba8ab2" Sep 12 17:46:24.633271 containerd[1714]: 2025-09-12 17:46:24.617 [INFO][6015] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4f641c9c531fe7560f6396fa91ec0d1ce4d9dad8ccc213245989b0aa5cba8ab2" HandleID="k8s-pod-network.4f641c9c531fe7560f6396fa91ec0d1ce4d9dad8ccc213245989b0aa5cba8ab2" Workload="ci--4081.3.6--a--ca65cd0ccc-k8s-whisker--64cb98bb48--57t2s-eth0" Sep 12 17:46:24.633271 containerd[1714]: 2025-09-12 17:46:24.618 [INFO][6015] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:46:24.633271 containerd[1714]: 2025-09-12 17:46:24.618 [INFO][6015] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:46:24.633271 containerd[1714]: 2025-09-12 17:46:24.627 [WARNING][6015] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4f641c9c531fe7560f6396fa91ec0d1ce4d9dad8ccc213245989b0aa5cba8ab2" HandleID="k8s-pod-network.4f641c9c531fe7560f6396fa91ec0d1ce4d9dad8ccc213245989b0aa5cba8ab2" Workload="ci--4081.3.6--a--ca65cd0ccc-k8s-whisker--64cb98bb48--57t2s-eth0" Sep 12 17:46:24.633271 containerd[1714]: 2025-09-12 17:46:24.627 [INFO][6015] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4f641c9c531fe7560f6396fa91ec0d1ce4d9dad8ccc213245989b0aa5cba8ab2" HandleID="k8s-pod-network.4f641c9c531fe7560f6396fa91ec0d1ce4d9dad8ccc213245989b0aa5cba8ab2" Workload="ci--4081.3.6--a--ca65cd0ccc-k8s-whisker--64cb98bb48--57t2s-eth0" Sep 12 17:46:24.633271 containerd[1714]: 2025-09-12 17:46:24.629 [INFO][6015] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:46:24.633271 containerd[1714]: 2025-09-12 17:46:24.631 [INFO][6008] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4f641c9c531fe7560f6396fa91ec0d1ce4d9dad8ccc213245989b0aa5cba8ab2" Sep 12 17:46:24.633271 containerd[1714]: time="2025-09-12T17:46:24.632899977Z" level=info msg="TearDown network for sandbox \"4f641c9c531fe7560f6396fa91ec0d1ce4d9dad8ccc213245989b0aa5cba8ab2\" successfully" Sep 12 17:46:24.642242 containerd[1714]: time="2025-09-12T17:46:24.642182265Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4f641c9c531fe7560f6396fa91ec0d1ce4d9dad8ccc213245989b0aa5cba8ab2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 12 17:46:24.642374 containerd[1714]: time="2025-09-12T17:46:24.642294225Z" level=info msg="RemovePodSandbox \"4f641c9c531fe7560f6396fa91ec0d1ce4d9dad8ccc213245989b0aa5cba8ab2\" returns successfully" Sep 12 17:46:24.642844 containerd[1714]: time="2025-09-12T17:46:24.642815625Z" level=info msg="StopPodSandbox for \"6231a9efd0cb51838e9a568fa8e5f840024bbba691fddd4da61c2d8ed6a2268a\"" Sep 12 17:46:24.712972 containerd[1714]: 2025-09-12 17:46:24.678 [WARNING][6029] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6231a9efd0cb51838e9a568fa8e5f840024bbba691fddd4da61c2d8ed6a2268a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--ca65cd0ccc-k8s-csi--node--driver--jjztm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"64c67752-89ee-4f26-b63e-b37e41be4790", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 45, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-ca65cd0ccc", ContainerID:"ed4c973e8b3a9e3413e0f6e2da06b4c9afc03ca940400b8d31dc588f1910975d", Pod:"csi-node-driver-jjztm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.21.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali15b548155ea", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:46:24.712972 containerd[1714]: 2025-09-12 17:46:24.678 [INFO][6029] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6231a9efd0cb51838e9a568fa8e5f840024bbba691fddd4da61c2d8ed6a2268a" Sep 12 17:46:24.712972 containerd[1714]: 2025-09-12 17:46:24.678 [INFO][6029] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6231a9efd0cb51838e9a568fa8e5f840024bbba691fddd4da61c2d8ed6a2268a" iface="eth0" netns="" Sep 12 17:46:24.712972 containerd[1714]: 2025-09-12 17:46:24.678 [INFO][6029] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6231a9efd0cb51838e9a568fa8e5f840024bbba691fddd4da61c2d8ed6a2268a" Sep 12 17:46:24.712972 containerd[1714]: 2025-09-12 17:46:24.678 [INFO][6029] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6231a9efd0cb51838e9a568fa8e5f840024bbba691fddd4da61c2d8ed6a2268a" Sep 12 17:46:24.712972 containerd[1714]: 2025-09-12 17:46:24.698 [INFO][6036] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6231a9efd0cb51838e9a568fa8e5f840024bbba691fddd4da61c2d8ed6a2268a" HandleID="k8s-pod-network.6231a9efd0cb51838e9a568fa8e5f840024bbba691fddd4da61c2d8ed6a2268a" Workload="ci--4081.3.6--a--ca65cd0ccc-k8s-csi--node--driver--jjztm-eth0" Sep 12 17:46:24.712972 containerd[1714]: 2025-09-12 17:46:24.698 [INFO][6036] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:46:24.712972 containerd[1714]: 2025-09-12 17:46:24.698 [INFO][6036] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:46:24.712972 containerd[1714]: 2025-09-12 17:46:24.707 [WARNING][6036] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6231a9efd0cb51838e9a568fa8e5f840024bbba691fddd4da61c2d8ed6a2268a" HandleID="k8s-pod-network.6231a9efd0cb51838e9a568fa8e5f840024bbba691fddd4da61c2d8ed6a2268a" Workload="ci--4081.3.6--a--ca65cd0ccc-k8s-csi--node--driver--jjztm-eth0" Sep 12 17:46:24.712972 containerd[1714]: 2025-09-12 17:46:24.707 [INFO][6036] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6231a9efd0cb51838e9a568fa8e5f840024bbba691fddd4da61c2d8ed6a2268a" HandleID="k8s-pod-network.6231a9efd0cb51838e9a568fa8e5f840024bbba691fddd4da61c2d8ed6a2268a" Workload="ci--4081.3.6--a--ca65cd0ccc-k8s-csi--node--driver--jjztm-eth0" Sep 12 17:46:24.712972 containerd[1714]: 2025-09-12 17:46:24.709 [INFO][6036] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:46:24.712972 containerd[1714]: 2025-09-12 17:46:24.711 [INFO][6029] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6231a9efd0cb51838e9a568fa8e5f840024bbba691fddd4da61c2d8ed6a2268a" Sep 12 17:46:24.713995 containerd[1714]: time="2025-09-12T17:46:24.712937923Z" level=info msg="TearDown network for sandbox \"6231a9efd0cb51838e9a568fa8e5f840024bbba691fddd4da61c2d8ed6a2268a\" successfully" Sep 12 17:46:24.714098 containerd[1714]: time="2025-09-12T17:46:24.713994404Z" level=info msg="StopPodSandbox for \"6231a9efd0cb51838e9a568fa8e5f840024bbba691fddd4da61c2d8ed6a2268a\" returns successfully" Sep 12 17:46:24.714666 containerd[1714]: time="2025-09-12T17:46:24.714636684Z" level=info msg="RemovePodSandbox for \"6231a9efd0cb51838e9a568fa8e5f840024bbba691fddd4da61c2d8ed6a2268a\"" Sep 12 17:46:24.714742 containerd[1714]: time="2025-09-12T17:46:24.714673364Z" level=info msg="Forcibly stopping sandbox \"6231a9efd0cb51838e9a568fa8e5f840024bbba691fddd4da61c2d8ed6a2268a\"" Sep 12 17:46:24.784989 containerd[1714]: 2025-09-12 17:46:24.752 [WARNING][6050] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6231a9efd0cb51838e9a568fa8e5f840024bbba691fddd4da61c2d8ed6a2268a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--ca65cd0ccc-k8s-csi--node--driver--jjztm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"64c67752-89ee-4f26-b63e-b37e41be4790", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 45, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-ca65cd0ccc", ContainerID:"ed4c973e8b3a9e3413e0f6e2da06b4c9afc03ca940400b8d31dc588f1910975d", Pod:"csi-node-driver-jjztm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.21.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali15b548155ea", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:46:24.784989 containerd[1714]: 2025-09-12 17:46:24.752 [INFO][6050] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6231a9efd0cb51838e9a568fa8e5f840024bbba691fddd4da61c2d8ed6a2268a" Sep 12 17:46:24.784989 containerd[1714]: 2025-09-12 17:46:24.752 [INFO][6050] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="6231a9efd0cb51838e9a568fa8e5f840024bbba691fddd4da61c2d8ed6a2268a" iface="eth0" netns="" Sep 12 17:46:24.784989 containerd[1714]: 2025-09-12 17:46:24.752 [INFO][6050] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6231a9efd0cb51838e9a568fa8e5f840024bbba691fddd4da61c2d8ed6a2268a" Sep 12 17:46:24.784989 containerd[1714]: 2025-09-12 17:46:24.752 [INFO][6050] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6231a9efd0cb51838e9a568fa8e5f840024bbba691fddd4da61c2d8ed6a2268a" Sep 12 17:46:24.784989 containerd[1714]: 2025-09-12 17:46:24.770 [INFO][6057] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6231a9efd0cb51838e9a568fa8e5f840024bbba691fddd4da61c2d8ed6a2268a" HandleID="k8s-pod-network.6231a9efd0cb51838e9a568fa8e5f840024bbba691fddd4da61c2d8ed6a2268a" Workload="ci--4081.3.6--a--ca65cd0ccc-k8s-csi--node--driver--jjztm-eth0" Sep 12 17:46:24.784989 containerd[1714]: 2025-09-12 17:46:24.770 [INFO][6057] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:46:24.784989 containerd[1714]: 2025-09-12 17:46:24.770 [INFO][6057] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:46:24.784989 containerd[1714]: 2025-09-12 17:46:24.780 [WARNING][6057] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6231a9efd0cb51838e9a568fa8e5f840024bbba691fddd4da61c2d8ed6a2268a" HandleID="k8s-pod-network.6231a9efd0cb51838e9a568fa8e5f840024bbba691fddd4da61c2d8ed6a2268a" Workload="ci--4081.3.6--a--ca65cd0ccc-k8s-csi--node--driver--jjztm-eth0" Sep 12 17:46:24.784989 containerd[1714]: 2025-09-12 17:46:24.780 [INFO][6057] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6231a9efd0cb51838e9a568fa8e5f840024bbba691fddd4da61c2d8ed6a2268a" HandleID="k8s-pod-network.6231a9efd0cb51838e9a568fa8e5f840024bbba691fddd4da61c2d8ed6a2268a" Workload="ci--4081.3.6--a--ca65cd0ccc-k8s-csi--node--driver--jjztm-eth0" Sep 12 17:46:24.784989 containerd[1714]: 2025-09-12 17:46:24.782 [INFO][6057] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:46:24.784989 containerd[1714]: 2025-09-12 17:46:24.783 [INFO][6050] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6231a9efd0cb51838e9a568fa8e5f840024bbba691fddd4da61c2d8ed6a2268a" Sep 12 17:46:24.785407 containerd[1714]: time="2025-09-12T17:46:24.785047222Z" level=info msg="TearDown network for sandbox \"6231a9efd0cb51838e9a568fa8e5f840024bbba691fddd4da61c2d8ed6a2268a\" successfully" Sep 12 17:46:24.814691 containerd[1714]: time="2025-09-12T17:46:24.814612927Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6231a9efd0cb51838e9a568fa8e5f840024bbba691fddd4da61c2d8ed6a2268a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 12 17:46:24.814869 containerd[1714]: time="2025-09-12T17:46:24.814750447Z" level=info msg="RemovePodSandbox \"6231a9efd0cb51838e9a568fa8e5f840024bbba691fddd4da61c2d8ed6a2268a\" returns successfully" Sep 12 17:46:24.815401 containerd[1714]: time="2025-09-12T17:46:24.815374727Z" level=info msg="StopPodSandbox for \"173fd8c2d24df8fdb0ea4d253630de5c941845ea80bc13b84b802583e715ba1e\"" Sep 12 17:46:24.887505 containerd[1714]: 2025-09-12 17:46:24.853 [WARNING][6071] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="173fd8c2d24df8fdb0ea4d253630de5c941845ea80bc13b84b802583e715ba1e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--ca65cd0ccc-k8s-calico--apiserver--868c8dbd57--s6vg4-eth0", GenerateName:"calico-apiserver-868c8dbd57-", Namespace:"calico-apiserver", SelfLink:"", UID:"bbd20681-1f6f-410b-806a-7fe01c4c3226", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 45, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"868c8dbd57", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-ca65cd0ccc", ContainerID:"5ccf3d7f1bd3b1750e05760933337c3a80dae8095b11eaddc99c8ad8ec6e12b4", Pod:"calico-apiserver-868c8dbd57-s6vg4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.21.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali27c0a359e7b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:46:24.887505 containerd[1714]: 2025-09-12 17:46:24.854 [INFO][6071] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="173fd8c2d24df8fdb0ea4d253630de5c941845ea80bc13b84b802583e715ba1e" Sep 12 17:46:24.887505 containerd[1714]: 2025-09-12 17:46:24.854 [INFO][6071] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="173fd8c2d24df8fdb0ea4d253630de5c941845ea80bc13b84b802583e715ba1e" iface="eth0" netns="" Sep 12 17:46:24.887505 containerd[1714]: 2025-09-12 17:46:24.854 [INFO][6071] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="173fd8c2d24df8fdb0ea4d253630de5c941845ea80bc13b84b802583e715ba1e" Sep 12 17:46:24.887505 containerd[1714]: 2025-09-12 17:46:24.854 [INFO][6071] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="173fd8c2d24df8fdb0ea4d253630de5c941845ea80bc13b84b802583e715ba1e" Sep 12 17:46:24.887505 containerd[1714]: 2025-09-12 17:46:24.873 [INFO][6078] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="173fd8c2d24df8fdb0ea4d253630de5c941845ea80bc13b84b802583e715ba1e" HandleID="k8s-pod-network.173fd8c2d24df8fdb0ea4d253630de5c941845ea80bc13b84b802583e715ba1e" Workload="ci--4081.3.6--a--ca65cd0ccc-k8s-calico--apiserver--868c8dbd57--s6vg4-eth0" Sep 12 17:46:24.887505 containerd[1714]: 2025-09-12 17:46:24.873 [INFO][6078] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:46:24.887505 containerd[1714]: 2025-09-12 17:46:24.873 [INFO][6078] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:46:24.887505 containerd[1714]: 2025-09-12 17:46:24.882 [WARNING][6078] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="173fd8c2d24df8fdb0ea4d253630de5c941845ea80bc13b84b802583e715ba1e" HandleID="k8s-pod-network.173fd8c2d24df8fdb0ea4d253630de5c941845ea80bc13b84b802583e715ba1e" Workload="ci--4081.3.6--a--ca65cd0ccc-k8s-calico--apiserver--868c8dbd57--s6vg4-eth0" Sep 12 17:46:24.887505 containerd[1714]: 2025-09-12 17:46:24.882 [INFO][6078] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="173fd8c2d24df8fdb0ea4d253630de5c941845ea80bc13b84b802583e715ba1e" HandleID="k8s-pod-network.173fd8c2d24df8fdb0ea4d253630de5c941845ea80bc13b84b802583e715ba1e" Workload="ci--4081.3.6--a--ca65cd0ccc-k8s-calico--apiserver--868c8dbd57--s6vg4-eth0" Sep 12 17:46:24.887505 containerd[1714]: 2025-09-12 17:46:24.884 [INFO][6078] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:46:24.887505 containerd[1714]: 2025-09-12 17:46:24.885 [INFO][6071] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="173fd8c2d24df8fdb0ea4d253630de5c941845ea80bc13b84b802583e715ba1e" Sep 12 17:46:24.887919 containerd[1714]: time="2025-09-12T17:46:24.887543462Z" level=info msg="TearDown network for sandbox \"173fd8c2d24df8fdb0ea4d253630de5c941845ea80bc13b84b802583e715ba1e\" successfully" Sep 12 17:46:24.887919 containerd[1714]: time="2025-09-12T17:46:24.887570022Z" level=info msg="StopPodSandbox for \"173fd8c2d24df8fdb0ea4d253630de5c941845ea80bc13b84b802583e715ba1e\" returns successfully" Sep 12 17:46:24.888348 containerd[1714]: time="2025-09-12T17:46:24.888321503Z" level=info msg="RemovePodSandbox for \"173fd8c2d24df8fdb0ea4d253630de5c941845ea80bc13b84b802583e715ba1e\"" Sep 12 17:46:24.888413 containerd[1714]: time="2025-09-12T17:46:24.888354983Z" level=info msg="Forcibly stopping sandbox \"173fd8c2d24df8fdb0ea4d253630de5c941845ea80bc13b84b802583e715ba1e\"" Sep 12 17:46:24.965483 containerd[1714]: 2025-09-12 17:46:24.923 [WARNING][6092] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="173fd8c2d24df8fdb0ea4d253630de5c941845ea80bc13b84b802583e715ba1e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--ca65cd0ccc-k8s-calico--apiserver--868c8dbd57--s6vg4-eth0", GenerateName:"calico-apiserver-868c8dbd57-", Namespace:"calico-apiserver", SelfLink:"", UID:"bbd20681-1f6f-410b-806a-7fe01c4c3226", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 45, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"868c8dbd57", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-ca65cd0ccc", ContainerID:"5ccf3d7f1bd3b1750e05760933337c3a80dae8095b11eaddc99c8ad8ec6e12b4", Pod:"calico-apiserver-868c8dbd57-s6vg4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.21.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali27c0a359e7b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:46:24.965483 containerd[1714]: 2025-09-12 17:46:24.923 [INFO][6092] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="173fd8c2d24df8fdb0ea4d253630de5c941845ea80bc13b84b802583e715ba1e" Sep 12 17:46:24.965483 containerd[1714]: 2025-09-12 17:46:24.923 [INFO][6092] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="173fd8c2d24df8fdb0ea4d253630de5c941845ea80bc13b84b802583e715ba1e" iface="eth0" netns="" Sep 12 17:46:24.965483 containerd[1714]: 2025-09-12 17:46:24.923 [INFO][6092] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="173fd8c2d24df8fdb0ea4d253630de5c941845ea80bc13b84b802583e715ba1e" Sep 12 17:46:24.965483 containerd[1714]: 2025-09-12 17:46:24.923 [INFO][6092] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="173fd8c2d24df8fdb0ea4d253630de5c941845ea80bc13b84b802583e715ba1e" Sep 12 17:46:24.965483 containerd[1714]: 2025-09-12 17:46:24.951 [INFO][6099] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="173fd8c2d24df8fdb0ea4d253630de5c941845ea80bc13b84b802583e715ba1e" HandleID="k8s-pod-network.173fd8c2d24df8fdb0ea4d253630de5c941845ea80bc13b84b802583e715ba1e" Workload="ci--4081.3.6--a--ca65cd0ccc-k8s-calico--apiserver--868c8dbd57--s6vg4-eth0" Sep 12 17:46:24.965483 containerd[1714]: 2025-09-12 17:46:24.951 [INFO][6099] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:46:24.965483 containerd[1714]: 2025-09-12 17:46:24.951 [INFO][6099] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:46:24.965483 containerd[1714]: 2025-09-12 17:46:24.960 [WARNING][6099] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="173fd8c2d24df8fdb0ea4d253630de5c941845ea80bc13b84b802583e715ba1e" HandleID="k8s-pod-network.173fd8c2d24df8fdb0ea4d253630de5c941845ea80bc13b84b802583e715ba1e" Workload="ci--4081.3.6--a--ca65cd0ccc-k8s-calico--apiserver--868c8dbd57--s6vg4-eth0" Sep 12 17:46:24.965483 containerd[1714]: 2025-09-12 17:46:24.960 [INFO][6099] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="173fd8c2d24df8fdb0ea4d253630de5c941845ea80bc13b84b802583e715ba1e" HandleID="k8s-pod-network.173fd8c2d24df8fdb0ea4d253630de5c941845ea80bc13b84b802583e715ba1e" Workload="ci--4081.3.6--a--ca65cd0ccc-k8s-calico--apiserver--868c8dbd57--s6vg4-eth0" Sep 12 17:46:24.965483 containerd[1714]: 2025-09-12 17:46:24.962 [INFO][6099] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:46:24.965483 containerd[1714]: 2025-09-12 17:46:24.963 [INFO][6092] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="173fd8c2d24df8fdb0ea4d253630de5c941845ea80bc13b84b802583e715ba1e" Sep 12 17:46:24.965483 containerd[1714]: time="2025-09-12T17:46:24.965336720Z" level=info msg="TearDown network for sandbox \"173fd8c2d24df8fdb0ea4d253630de5c941845ea80bc13b84b802583e715ba1e\" successfully" Sep 12 17:46:24.975649 containerd[1714]: time="2025-09-12T17:46:24.975586527Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"173fd8c2d24df8fdb0ea4d253630de5c941845ea80bc13b84b802583e715ba1e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 12 17:46:24.975822 containerd[1714]: time="2025-09-12T17:46:24.975663527Z" level=info msg="RemovePodSandbox \"173fd8c2d24df8fdb0ea4d253630de5c941845ea80bc13b84b802583e715ba1e\" returns successfully" Sep 12 17:46:25.805570 containerd[1714]: time="2025-09-12T17:46:25.805513659Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:46:25.809698 containerd[1714]: time="2025-09-12T17:46:25.809541742Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8227489" Sep 12 17:46:25.813690 containerd[1714]: time="2025-09-12T17:46:25.813414425Z" level=info msg="ImageCreate event name:\"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:46:25.819949 containerd[1714]: time="2025-09-12T17:46:25.819898670Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:46:25.820897 containerd[1714]: time="2025-09-12T17:46:25.820858471Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"9596730\" in 3.617513654s" Sep 12 17:46:25.821094 containerd[1714]: time="2025-09-12T17:46:25.820998791Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\"" Sep 12 17:46:25.823192 containerd[1714]: time="2025-09-12T17:46:25.823156792Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 17:46:25.830383 
containerd[1714]: time="2025-09-12T17:46:25.830222477Z" level=info msg="CreateContainer within sandbox \"ed4c973e8b3a9e3413e0f6e2da06b4c9afc03ca940400b8d31dc588f1910975d\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 12 17:46:25.885596 containerd[1714]: time="2025-09-12T17:46:25.885546518Z" level=info msg="CreateContainer within sandbox \"ed4c973e8b3a9e3413e0f6e2da06b4c9afc03ca940400b8d31dc588f1910975d\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"c4f4c795b531589d14e0d5bf308457c37d440ea4bef57df48e18d2eae6297db8\"" Sep 12 17:46:25.886652 containerd[1714]: time="2025-09-12T17:46:25.886599919Z" level=info msg="StartContainer for \"c4f4c795b531589d14e0d5bf308457c37d440ea4bef57df48e18d2eae6297db8\"" Sep 12 17:46:25.930487 systemd[1]: Started cri-containerd-c4f4c795b531589d14e0d5bf308457c37d440ea4bef57df48e18d2eae6297db8.scope - libcontainer container c4f4c795b531589d14e0d5bf308457c37d440ea4bef57df48e18d2eae6297db8. Sep 12 17:46:25.968202 containerd[1714]: time="2025-09-12T17:46:25.967928459Z" level=info msg="StartContainer for \"c4f4c795b531589d14e0d5bf308457c37d440ea4bef57df48e18d2eae6297db8\" returns successfully" Sep 12 17:46:26.175845 containerd[1714]: time="2025-09-12T17:46:26.175721012Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:46:26.182265 containerd[1714]: time="2025-09-12T17:46:26.182069857Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 12 17:46:26.184347 containerd[1714]: time="2025-09-12T17:46:26.184215658Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest 
\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 361.019946ms" Sep 12 17:46:26.184347 containerd[1714]: time="2025-09-12T17:46:26.184279338Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 12 17:46:26.185528 containerd[1714]: time="2025-09-12T17:46:26.185166179Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 12 17:46:26.192304 containerd[1714]: time="2025-09-12T17:46:26.192269664Z" level=info msg="CreateContainer within sandbox \"5ccf3d7f1bd3b1750e05760933337c3a80dae8095b11eaddc99c8ad8ec6e12b4\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 17:46:26.228305 containerd[1714]: time="2025-09-12T17:46:26.228187851Z" level=info msg="CreateContainer within sandbox \"5ccf3d7f1bd3b1750e05760933337c3a80dae8095b11eaddc99c8ad8ec6e12b4\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"4b67e05bc021a87e39f56d1d383224bd69c37513001b63724d7dd10628e410b4\"" Sep 12 17:46:26.229077 containerd[1714]: time="2025-09-12T17:46:26.229037611Z" level=info msg="StartContainer for \"4b67e05bc021a87e39f56d1d383224bd69c37513001b63724d7dd10628e410b4\"" Sep 12 17:46:26.263467 systemd[1]: Started cri-containerd-4b67e05bc021a87e39f56d1d383224bd69c37513001b63724d7dd10628e410b4.scope - libcontainer container 4b67e05bc021a87e39f56d1d383224bd69c37513001b63724d7dd10628e410b4. 
Sep 12 17:46:26.303898 containerd[1714]: time="2025-09-12T17:46:26.303094266Z" level=info msg="StartContainer for \"4b67e05bc021a87e39f56d1d383224bd69c37513001b63724d7dd10628e410b4\" returns successfully" Sep 12 17:46:26.615360 kubelet[3184]: I0912 17:46:26.615298 3184 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-868c8dbd57-s6vg4" podStartSLOduration=36.876987097 podStartE2EDuration="48.615281256s" podCreationTimestamp="2025-09-12 17:45:38 +0000 UTC" firstStartedPulling="2025-09-12 17:46:14.44671738 +0000 UTC m=+52.302345847" lastFinishedPulling="2025-09-12 17:46:26.185011539 +0000 UTC m=+64.040640006" observedRunningTime="2025-09-12 17:46:26.614112335 +0000 UTC m=+64.469740802" watchObservedRunningTime="2025-09-12 17:46:26.615281256 +0000 UTC m=+64.470909723" Sep 12 17:46:26.615805 kubelet[3184]: I0912 17:46:26.615631 3184 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-868c8dbd57-5bhwh" podStartSLOduration=40.344189265 podStartE2EDuration="48.615625416s" podCreationTimestamp="2025-09-12 17:45:38 +0000 UTC" firstStartedPulling="2025-09-12 17:46:13.930913985 +0000 UTC m=+51.786542452" lastFinishedPulling="2025-09-12 17:46:22.202350176 +0000 UTC m=+60.057978603" observedRunningTime="2025-09-12 17:46:23.60381821 +0000 UTC m=+61.459446677" watchObservedRunningTime="2025-09-12 17:46:26.615625416 +0000 UTC m=+64.471253843" Sep 12 17:46:27.608912 kubelet[3184]: I0912 17:46:27.608876 3184 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:46:30.065962 containerd[1714]: time="2025-09-12T17:46:30.065214840Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:46:30.073136 containerd[1714]: time="2025-09-12T17:46:30.072847645Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active 
requests=0, bytes read=48134957" Sep 12 17:46:30.078908 containerd[1714]: time="2025-09-12T17:46:30.078870690Z" level=info msg="ImageCreate event name:\"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:46:30.084126 containerd[1714]: time="2025-09-12T17:46:30.084086854Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:46:30.084861 containerd[1714]: time="2025-09-12T17:46:30.084823774Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"49504166\" in 3.899626315s" Sep 12 17:46:30.084861 containerd[1714]: time="2025-09-12T17:46:30.084857614Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\"" Sep 12 17:46:30.087056 containerd[1714]: time="2025-09-12T17:46:30.086881976Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 12 17:46:30.107552 containerd[1714]: time="2025-09-12T17:46:30.107509351Z" level=info msg="CreateContainer within sandbox \"059cb8cae07a39ed2f0e74e8f0aaf856c4b539ee0049600e2cabd1433685421d\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 12 17:46:30.146200 containerd[1714]: time="2025-09-12T17:46:30.146128019Z" level=info msg="CreateContainer within sandbox \"059cb8cae07a39ed2f0e74e8f0aaf856c4b539ee0049600e2cabd1433685421d\" for 
&ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"96611bb3e6a77e476830fdb7b99f695309dc650b44402a9dd34fc1d291661146\"" Sep 12 17:46:30.147108 containerd[1714]: time="2025-09-12T17:46:30.147067260Z" level=info msg="StartContainer for \"96611bb3e6a77e476830fdb7b99f695309dc650b44402a9dd34fc1d291661146\"" Sep 12 17:46:30.181529 systemd[1]: Started cri-containerd-96611bb3e6a77e476830fdb7b99f695309dc650b44402a9dd34fc1d291661146.scope - libcontainer container 96611bb3e6a77e476830fdb7b99f695309dc650b44402a9dd34fc1d291661146. Sep 12 17:46:30.222407 containerd[1714]: time="2025-09-12T17:46:30.222278716Z" level=info msg="StartContainer for \"96611bb3e6a77e476830fdb7b99f695309dc650b44402a9dd34fc1d291661146\" returns successfully" Sep 12 17:46:30.643040 kubelet[3184]: I0912 17:46:30.642879 3184 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-84df6476b7-wbsmk" podStartSLOduration=29.518032106 podStartE2EDuration="44.642858506s" podCreationTimestamp="2025-09-12 17:45:46 +0000 UTC" firstStartedPulling="2025-09-12 17:46:14.961338895 +0000 UTC m=+52.816967362" lastFinishedPulling="2025-09-12 17:46:30.086165295 +0000 UTC m=+67.941793762" observedRunningTime="2025-09-12 17:46:30.642365625 +0000 UTC m=+68.497994052" watchObservedRunningTime="2025-09-12 17:46:30.642858506 +0000 UTC m=+68.498486973" Sep 12 17:46:31.410407 containerd[1714]: time="2025-09-12T17:46:31.410350072Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:46:31.414149 containerd[1714]: time="2025-09-12T17:46:31.413991314Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=13761208" Sep 12 17:46:31.419382 containerd[1714]: time="2025-09-12T17:46:31.419029278Z" level=info msg="ImageCreate event 
name:\"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:46:31.425294 containerd[1714]: time="2025-09-12T17:46:31.425211083Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:46:31.426223 containerd[1714]: time="2025-09-12T17:46:31.426165243Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"15130401\" in 1.339242787s" Sep 12 17:46:31.426299 containerd[1714]: time="2025-09-12T17:46:31.426252963Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\"" Sep 12 17:46:31.437973 containerd[1714]: time="2025-09-12T17:46:31.437926812Z" level=info msg="CreateContainer within sandbox \"ed4c973e8b3a9e3413e0f6e2da06b4c9afc03ca940400b8d31dc588f1910975d\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 12 17:46:31.486195 containerd[1714]: time="2025-09-12T17:46:31.486142047Z" level=info msg="CreateContainer within sandbox \"ed4c973e8b3a9e3413e0f6e2da06b4c9afc03ca940400b8d31dc588f1910975d\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"a5da23f156d5f9c5f151404c3be184dee052a479b6d58ce4525edc5366948ace\"" Sep 12 17:46:31.487737 containerd[1714]: time="2025-09-12T17:46:31.487534208Z" level=info msg="StartContainer for 
\"a5da23f156d5f9c5f151404c3be184dee052a479b6d58ce4525edc5366948ace\"" Sep 12 17:46:31.524456 systemd[1]: Started cri-containerd-a5da23f156d5f9c5f151404c3be184dee052a479b6d58ce4525edc5366948ace.scope - libcontainer container a5da23f156d5f9c5f151404c3be184dee052a479b6d58ce4525edc5366948ace. Sep 12 17:46:31.579294 containerd[1714]: time="2025-09-12T17:46:31.578671756Z" level=info msg="StartContainer for \"a5da23f156d5f9c5f151404c3be184dee052a479b6d58ce4525edc5366948ace\" returns successfully" Sep 12 17:46:32.388465 kubelet[3184]: I0912 17:46:32.388385 3184 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 12 17:46:32.393173 kubelet[3184]: I0912 17:46:32.393041 3184 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 12 17:46:37.569297 kubelet[3184]: I0912 17:46:37.568522 3184 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-jjztm" podStartSLOduration=34.532379371 podStartE2EDuration="51.568505042s" podCreationTimestamp="2025-09-12 17:45:46 +0000 UTC" firstStartedPulling="2025-09-12 17:46:14.391033853 +0000 UTC m=+52.246662320" lastFinishedPulling="2025-09-12 17:46:31.427159524 +0000 UTC m=+69.282787991" observedRunningTime="2025-09-12 17:46:31.648832087 +0000 UTC m=+69.504460554" watchObservedRunningTime="2025-09-12 17:46:37.568505042 +0000 UTC m=+75.424133509" Sep 12 17:46:40.364043 systemd[1]: run-containerd-runc-k8s.io-96611bb3e6a77e476830fdb7b99f695309dc650b44402a9dd34fc1d291661146-runc.zkzz8Y.mount: Deactivated successfully. Sep 12 17:46:41.675684 systemd[1]: Started sshd@7-10.200.20.46:22-10.200.16.10:51196.service - OpenSSH per-connection server daemon (10.200.16.10:51196). 
Sep 12 17:46:42.136053 sshd[6356]: Accepted publickey for core from 10.200.16.10 port 51196 ssh2: RSA SHA256:AyxZlAlMlcR5vugKaGkRn7Fc8dKoxnuuK/wm0HuJN6k Sep 12 17:46:42.138846 sshd[6356]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:46:42.143077 systemd-logind[1692]: New session 10 of user core. Sep 12 17:46:42.149602 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 12 17:46:42.552711 sshd[6356]: pam_unix(sshd:session): session closed for user core Sep 12 17:46:42.556942 systemd[1]: sshd@7-10.200.20.46:22-10.200.16.10:51196.service: Deactivated successfully. Sep 12 17:46:42.559297 systemd[1]: session-10.scope: Deactivated successfully. Sep 12 17:46:42.560457 systemd-logind[1692]: Session 10 logged out. Waiting for processes to exit. Sep 12 17:46:42.563616 systemd-logind[1692]: Removed session 10. Sep 12 17:46:47.627541 systemd[1]: Started sshd@8-10.200.20.46:22-10.200.16.10:51202.service - OpenSSH per-connection server daemon (10.200.16.10:51202). Sep 12 17:46:48.044071 sshd[6372]: Accepted publickey for core from 10.200.16.10 port 51202 ssh2: RSA SHA256:AyxZlAlMlcR5vugKaGkRn7Fc8dKoxnuuK/wm0HuJN6k Sep 12 17:46:48.045483 sshd[6372]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:46:48.052262 systemd-logind[1692]: New session 11 of user core. Sep 12 17:46:48.061669 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 12 17:46:48.421505 sshd[6372]: pam_unix(sshd:session): session closed for user core Sep 12 17:46:48.428046 systemd[1]: sshd@8-10.200.20.46:22-10.200.16.10:51202.service: Deactivated successfully. Sep 12 17:46:48.432771 systemd[1]: session-11.scope: Deactivated successfully. Sep 12 17:46:48.434116 systemd-logind[1692]: Session 11 logged out. Waiting for processes to exit. Sep 12 17:46:48.435541 systemd-logind[1692]: Removed session 11. 
Sep 12 17:46:53.517526 systemd[1]: Started sshd@9-10.200.20.46:22-10.200.16.10:35586.service - OpenSSH per-connection server daemon (10.200.16.10:35586). Sep 12 17:46:53.976036 sshd[6415]: Accepted publickey for core from 10.200.16.10 port 35586 ssh2: RSA SHA256:AyxZlAlMlcR5vugKaGkRn7Fc8dKoxnuuK/wm0HuJN6k Sep 12 17:46:53.977948 sshd[6415]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:46:53.982937 systemd-logind[1692]: New session 12 of user core. Sep 12 17:46:53.988444 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 12 17:46:54.378517 sshd[6415]: pam_unix(sshd:session): session closed for user core Sep 12 17:46:54.382150 systemd[1]: sshd@9-10.200.20.46:22-10.200.16.10:35586.service: Deactivated successfully. Sep 12 17:46:54.384448 systemd[1]: session-12.scope: Deactivated successfully. Sep 12 17:46:54.385317 systemd-logind[1692]: Session 12 logged out. Waiting for processes to exit. Sep 12 17:46:54.386191 systemd-logind[1692]: Removed session 12. Sep 12 17:46:54.451508 systemd[1]: Started sshd@10-10.200.20.46:22-10.200.16.10:35594.service - OpenSSH per-connection server daemon (10.200.16.10:35594). Sep 12 17:46:54.859187 sshd[6428]: Accepted publickey for core from 10.200.16.10 port 35594 ssh2: RSA SHA256:AyxZlAlMlcR5vugKaGkRn7Fc8dKoxnuuK/wm0HuJN6k Sep 12 17:46:54.860702 sshd[6428]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:46:54.864763 systemd-logind[1692]: New session 13 of user core. Sep 12 17:46:54.871469 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 12 17:46:55.276296 sshd[6428]: pam_unix(sshd:session): session closed for user core Sep 12 17:46:55.283148 systemd[1]: sshd@10-10.200.20.46:22-10.200.16.10:35594.service: Deactivated successfully. Sep 12 17:46:55.286993 systemd[1]: session-13.scope: Deactivated successfully. Sep 12 17:46:55.290210 systemd-logind[1692]: Session 13 logged out. Waiting for processes to exit. 
Sep 12 17:46:55.291902 systemd-logind[1692]: Removed session 13. Sep 12 17:46:55.376545 systemd[1]: Started sshd@11-10.200.20.46:22-10.200.16.10:35598.service - OpenSSH per-connection server daemon (10.200.16.10:35598). Sep 12 17:46:55.870111 sshd[6441]: Accepted publickey for core from 10.200.16.10 port 35598 ssh2: RSA SHA256:AyxZlAlMlcR5vugKaGkRn7Fc8dKoxnuuK/wm0HuJN6k Sep 12 17:46:55.871509 sshd[6441]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:46:55.876239 systemd-logind[1692]: New session 14 of user core. Sep 12 17:46:55.884406 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 12 17:46:56.285683 sshd[6441]: pam_unix(sshd:session): session closed for user core Sep 12 17:46:56.289139 systemd[1]: sshd@11-10.200.20.46:22-10.200.16.10:35598.service: Deactivated successfully. Sep 12 17:46:56.291425 systemd[1]: session-14.scope: Deactivated successfully. Sep 12 17:46:56.292208 systemd-logind[1692]: Session 14 logged out. Waiting for processes to exit. Sep 12 17:46:56.293100 systemd-logind[1692]: Removed session 14. Sep 12 17:46:57.156715 kubelet[3184]: I0912 17:46:57.156342 3184 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:47:01.394344 systemd[1]: Started sshd@12-10.200.20.46:22-10.200.16.10:37752.service - OpenSSH per-connection server daemon (10.200.16.10:37752). Sep 12 17:47:01.888356 sshd[6481]: Accepted publickey for core from 10.200.16.10 port 37752 ssh2: RSA SHA256:AyxZlAlMlcR5vugKaGkRn7Fc8dKoxnuuK/wm0HuJN6k Sep 12 17:47:01.889945 sshd[6481]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:47:01.894872 systemd-logind[1692]: New session 15 of user core. Sep 12 17:47:01.897672 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 12 17:47:02.302194 sshd[6481]: pam_unix(sshd:session): session closed for user core Sep 12 17:47:02.305796 systemd-logind[1692]: Session 15 logged out. Waiting for processes to exit. 
Sep 12 17:47:02.306518 systemd[1]: sshd@12-10.200.20.46:22-10.200.16.10:37752.service: Deactivated successfully. Sep 12 17:47:02.309587 systemd[1]: session-15.scope: Deactivated successfully. Sep 12 17:47:02.310812 systemd-logind[1692]: Removed session 15. Sep 12 17:47:07.381363 systemd[1]: Started sshd@13-10.200.20.46:22-10.200.16.10:37766.service - OpenSSH per-connection server daemon (10.200.16.10:37766). Sep 12 17:47:07.834123 sshd[6514]: Accepted publickey for core from 10.200.16.10 port 37766 ssh2: RSA SHA256:AyxZlAlMlcR5vugKaGkRn7Fc8dKoxnuuK/wm0HuJN6k Sep 12 17:47:07.835972 sshd[6514]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:47:07.839831 systemd-logind[1692]: New session 16 of user core. Sep 12 17:47:07.845371 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 12 17:47:08.222332 sshd[6514]: pam_unix(sshd:session): session closed for user core Sep 12 17:47:08.226023 systemd[1]: sshd@13-10.200.20.46:22-10.200.16.10:37766.service: Deactivated successfully. Sep 12 17:47:08.228181 systemd[1]: session-16.scope: Deactivated successfully. Sep 12 17:47:08.228943 systemd-logind[1692]: Session 16 logged out. Waiting for processes to exit. Sep 12 17:47:08.230579 systemd-logind[1692]: Removed session 16. Sep 12 17:47:13.317445 systemd[1]: Started sshd@14-10.200.20.46:22-10.200.16.10:50784.service - OpenSSH per-connection server daemon (10.200.16.10:50784). Sep 12 17:47:13.787718 sshd[6547]: Accepted publickey for core from 10.200.16.10 port 50784 ssh2: RSA SHA256:AyxZlAlMlcR5vugKaGkRn7Fc8dKoxnuuK/wm0HuJN6k Sep 12 17:47:13.789617 sshd[6547]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:47:13.798068 systemd-logind[1692]: New session 17 of user core. Sep 12 17:47:13.802203 systemd[1]: Started session-17.scope - Session 17 of User core. 
Sep 12 17:47:14.203834 sshd[6547]: pam_unix(sshd:session): session closed for user core
Sep 12 17:47:14.209184 systemd[1]: sshd@14-10.200.20.46:22-10.200.16.10:50784.service: Deactivated successfully.
Sep 12 17:47:14.214170 systemd[1]: session-17.scope: Deactivated successfully.
Sep 12 17:47:14.218494 systemd-logind[1692]: Session 17 logged out. Waiting for processes to exit.
Sep 12 17:47:14.220712 systemd-logind[1692]: Removed session 17.
Sep 12 17:47:14.305968 systemd[1]: Started sshd@15-10.200.20.46:22-10.200.16.10:50790.service - OpenSSH per-connection server daemon (10.200.16.10:50790).
Sep 12 17:47:14.807147 sshd[6560]: Accepted publickey for core from 10.200.16.10 port 50790 ssh2: RSA SHA256:AyxZlAlMlcR5vugKaGkRn7Fc8dKoxnuuK/wm0HuJN6k
Sep 12 17:47:14.809159 sshd[6560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:47:14.814739 systemd-logind[1692]: New session 18 of user core.
Sep 12 17:47:14.822441 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 12 17:47:15.411355 sshd[6560]: pam_unix(sshd:session): session closed for user core
Sep 12 17:47:15.416580 systemd[1]: sshd@15-10.200.20.46:22-10.200.16.10:50790.service: Deactivated successfully.
Sep 12 17:47:15.420978 systemd[1]: session-18.scope: Deactivated successfully.
Sep 12 17:47:15.425058 systemd-logind[1692]: Session 18 logged out. Waiting for processes to exit.
Sep 12 17:47:15.426431 systemd-logind[1692]: Removed session 18.
Sep 12 17:47:15.485786 systemd[1]: Started sshd@16-10.200.20.46:22-10.200.16.10:50798.service - OpenSSH per-connection server daemon (10.200.16.10:50798).
Sep 12 17:47:15.894648 sshd[6571]: Accepted publickey for core from 10.200.16.10 port 50798 ssh2: RSA SHA256:AyxZlAlMlcR5vugKaGkRn7Fc8dKoxnuuK/wm0HuJN6k
Sep 12 17:47:15.896968 sshd[6571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:47:15.905494 systemd-logind[1692]: New session 19 of user core.
Sep 12 17:47:15.912354 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 12 17:47:16.890661 sshd[6571]: pam_unix(sshd:session): session closed for user core
Sep 12 17:47:16.894883 systemd[1]: sshd@16-10.200.20.46:22-10.200.16.10:50798.service: Deactivated successfully.
Sep 12 17:47:16.900716 systemd[1]: session-19.scope: Deactivated successfully.
Sep 12 17:47:16.903290 systemd-logind[1692]: Session 19 logged out. Waiting for processes to exit.
Sep 12 17:47:16.904549 systemd-logind[1692]: Removed session 19.
Sep 12 17:47:16.969302 systemd[1]: Started sshd@17-10.200.20.46:22-10.200.16.10:50810.service - OpenSSH per-connection server daemon (10.200.16.10:50810).
Sep 12 17:47:17.388167 sshd[6594]: Accepted publickey for core from 10.200.16.10 port 50810 ssh2: RSA SHA256:AyxZlAlMlcR5vugKaGkRn7Fc8dKoxnuuK/wm0HuJN6k
Sep 12 17:47:17.394615 sshd[6594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:47:17.401825 systemd-logind[1692]: New session 20 of user core.
Sep 12 17:47:17.406151 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 12 17:47:17.961357 sshd[6594]: pam_unix(sshd:session): session closed for user core
Sep 12 17:47:17.966892 systemd[1]: sshd@17-10.200.20.46:22-10.200.16.10:50810.service: Deactivated successfully.
Sep 12 17:47:17.970779 systemd[1]: session-20.scope: Deactivated successfully.
Sep 12 17:47:17.971947 systemd-logind[1692]: Session 20 logged out. Waiting for processes to exit.
Sep 12 17:47:17.973172 systemd-logind[1692]: Removed session 20.
Sep 12 17:47:18.040507 systemd[1]: Started sshd@18-10.200.20.46:22-10.200.16.10:50822.service - OpenSSH per-connection server daemon (10.200.16.10:50822).
Sep 12 17:47:18.449320 sshd[6605]: Accepted publickey for core from 10.200.16.10 port 50822 ssh2: RSA SHA256:AyxZlAlMlcR5vugKaGkRn7Fc8dKoxnuuK/wm0HuJN6k
Sep 12 17:47:18.450649 sshd[6605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:47:18.454546 systemd-logind[1692]: New session 21 of user core.
Sep 12 17:47:18.460460 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 12 17:47:18.841477 sshd[6605]: pam_unix(sshd:session): session closed for user core
Sep 12 17:47:18.844398 systemd[1]: sshd@18-10.200.20.46:22-10.200.16.10:50822.service: Deactivated successfully.
Sep 12 17:47:18.846164 systemd[1]: session-21.scope: Deactivated successfully.
Sep 12 17:47:18.849700 systemd-logind[1692]: Session 21 logged out. Waiting for processes to exit.
Sep 12 17:47:18.853636 systemd-logind[1692]: Removed session 21.
Sep 12 17:47:23.922710 systemd[1]: Started sshd@19-10.200.20.46:22-10.200.16.10:54592.service - OpenSSH per-connection server daemon (10.200.16.10:54592).
Sep 12 17:47:24.348738 sshd[6643]: Accepted publickey for core from 10.200.16.10 port 54592 ssh2: RSA SHA256:AyxZlAlMlcR5vugKaGkRn7Fc8dKoxnuuK/wm0HuJN6k
Sep 12 17:47:24.349774 sshd[6643]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:47:24.355863 systemd-logind[1692]: New session 22 of user core.
Sep 12 17:47:24.361476 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 12 17:47:24.730427 sshd[6643]: pam_unix(sshd:session): session closed for user core
Sep 12 17:47:24.736869 systemd[1]: sshd@19-10.200.20.46:22-10.200.16.10:54592.service: Deactivated successfully.
Sep 12 17:47:24.740381 systemd[1]: session-22.scope: Deactivated successfully.
Sep 12 17:47:24.742619 systemd-logind[1692]: Session 22 logged out. Waiting for processes to exit.
Sep 12 17:47:24.746016 systemd-logind[1692]: Removed session 22.
Sep 12 17:47:25.489682 update_engine[1694]: I20250912 17:47:25.489357 1694 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Sep 12 17:47:25.489682 update_engine[1694]: I20250912 17:47:25.489408 1694 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Sep 12 17:47:25.489682 update_engine[1694]: I20250912 17:47:25.489629 1694 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Sep 12 17:47:25.491652 update_engine[1694]: I20250912 17:47:25.491534 1694 omaha_request_params.cc:62] Current group set to lts
Sep 12 17:47:25.491652 update_engine[1694]: I20250912 17:47:25.491639 1694 update_attempter.cc:499] Already updated boot flags. Skipping.
Sep 12 17:47:25.491652 update_engine[1694]: I20250912 17:47:25.491647 1694 update_attempter.cc:643] Scheduling an action processor start.
Sep 12 17:47:25.491786 update_engine[1694]: I20250912 17:47:25.491664 1694 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Sep 12 17:47:25.493112 update_engine[1694]: I20250912 17:47:25.492431 1694 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Sep 12 17:47:25.493112 update_engine[1694]: I20250912 17:47:25.492759 1694 omaha_request_action.cc:271] Posting an Omaha request to disabled
Sep 12 17:47:25.493112 update_engine[1694]: I20250912 17:47:25.492774 1694 omaha_request_action.cc:272] Request:
Sep 12 17:47:25.493112 update_engine[1694]:
Sep 12 17:47:25.493112 update_engine[1694]:
Sep 12 17:47:25.493112 update_engine[1694]:
Sep 12 17:47:25.493112 update_engine[1694]:
Sep 12 17:47:25.493112 update_engine[1694]:
Sep 12 17:47:25.493112 update_engine[1694]:
Sep 12 17:47:25.493112 update_engine[1694]:
Sep 12 17:47:25.493112 update_engine[1694]:
Sep 12 17:47:25.493112 update_engine[1694]: I20250912 17:47:25.492781 1694 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Sep 12 17:47:25.501094 update_engine[1694]: I20250912 17:47:25.501046 1694 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Sep 12 17:47:25.501400 update_engine[1694]: I20250912 17:47:25.501372 1694 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Sep 12 17:47:25.503171 locksmithd[1769]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Sep 12 17:47:25.519755 update_engine[1694]: E20250912 17:47:25.519690 1694 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Sep 12 17:47:25.519885 update_engine[1694]: I20250912 17:47:25.519788 1694 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Sep 12 17:47:29.811491 systemd[1]: Started sshd@20-10.200.20.46:22-10.200.16.10:54604.service - OpenSSH per-connection server daemon (10.200.16.10:54604).
Sep 12 17:47:30.219954 sshd[6664]: Accepted publickey for core from 10.200.16.10 port 54604 ssh2: RSA SHA256:AyxZlAlMlcR5vugKaGkRn7Fc8dKoxnuuK/wm0HuJN6k
Sep 12 17:47:30.221367 sshd[6664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:47:30.227814 systemd-logind[1692]: New session 23 of user core.
Sep 12 17:47:30.231467 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 12 17:47:30.648611 systemd[1]: run-containerd-runc-k8s.io-96611bb3e6a77e476830fdb7b99f695309dc650b44402a9dd34fc1d291661146-runc.7B6XSK.mount: Deactivated successfully.
Sep 12 17:47:30.653048 sshd[6664]: pam_unix(sshd:session): session closed for user core
Sep 12 17:47:30.660692 systemd[1]: sshd@20-10.200.20.46:22-10.200.16.10:54604.service: Deactivated successfully.
Sep 12 17:47:30.666931 systemd[1]: session-23.scope: Deactivated successfully.
Sep 12 17:47:30.668825 systemd-logind[1692]: Session 23 logged out. Waiting for processes to exit.
Sep 12 17:47:30.670064 systemd-logind[1692]: Removed session 23.
Sep 12 17:47:35.492344 update_engine[1694]: I20250912 17:47:35.492269 1694 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Sep 12 17:47:35.492733 update_engine[1694]: I20250912 17:47:35.492498 1694 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Sep 12 17:47:35.492733 update_engine[1694]: I20250912 17:47:35.492722 1694 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Sep 12 17:47:35.595481 update_engine[1694]: E20250912 17:47:35.595363 1694 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Sep 12 17:47:35.595481 update_engine[1694]: I20250912 17:47:35.595454 1694 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Sep 12 17:47:35.733658 systemd[1]: Started sshd@21-10.200.20.46:22-10.200.16.10:50046.service - OpenSSH per-connection server daemon (10.200.16.10:50046).
Sep 12 17:47:36.154588 sshd[6698]: Accepted publickey for core from 10.200.16.10 port 50046 ssh2: RSA SHA256:AyxZlAlMlcR5vugKaGkRn7Fc8dKoxnuuK/wm0HuJN6k
Sep 12 17:47:36.155941 sshd[6698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:47:36.159915 systemd-logind[1692]: New session 24 of user core.
Sep 12 17:47:36.165395 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 12 17:47:36.526087 sshd[6698]: pam_unix(sshd:session): session closed for user core
Sep 12 17:47:36.531822 systemd[1]: sshd@21-10.200.20.46:22-10.200.16.10:50046.service: Deactivated successfully.
Sep 12 17:47:36.534489 systemd[1]: session-24.scope: Deactivated successfully.
Sep 12 17:47:36.536357 systemd-logind[1692]: Session 24 logged out. Waiting for processes to exit.
Sep 12 17:47:36.537678 systemd-logind[1692]: Removed session 24.
Sep 12 17:47:41.627871 systemd[1]: Started sshd@22-10.200.20.46:22-10.200.16.10:37516.service - OpenSSH per-connection server daemon (10.200.16.10:37516).
Sep 12 17:47:42.123765 sshd[6774]: Accepted publickey for core from 10.200.16.10 port 37516 ssh2: RSA SHA256:AyxZlAlMlcR5vugKaGkRn7Fc8dKoxnuuK/wm0HuJN6k
Sep 12 17:47:42.127751 sshd[6774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:47:42.135364 systemd-logind[1692]: New session 25 of user core.
Sep 12 17:47:42.139687 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 12 17:47:42.580478 sshd[6774]: pam_unix(sshd:session): session closed for user core
Sep 12 17:47:42.585474 systemd[1]: sshd@22-10.200.20.46:22-10.200.16.10:37516.service: Deactivated successfully.
Sep 12 17:47:42.591825 systemd[1]: session-25.scope: Deactivated successfully.
Sep 12 17:47:42.593075 systemd-logind[1692]: Session 25 logged out. Waiting for processes to exit.
Sep 12 17:47:42.594363 systemd-logind[1692]: Removed session 25.
Sep 12 17:47:45.491309 update_engine[1694]: I20250912 17:47:45.490811 1694 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Sep 12 17:47:45.491309 update_engine[1694]: I20250912 17:47:45.491037 1694 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Sep 12 17:47:45.491780 update_engine[1694]: I20250912 17:47:45.491735 1694 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Sep 12 17:47:45.601182 update_engine[1694]: E20250912 17:47:45.601042 1694 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Sep 12 17:47:45.601182 update_engine[1694]: I20250912 17:47:45.601130 1694 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Sep 12 17:47:47.667557 systemd[1]: Started sshd@23-10.200.20.46:22-10.200.16.10:37532.service - OpenSSH per-connection server daemon (10.200.16.10:37532).
Sep 12 17:47:48.118247 sshd[6786]: Accepted publickey for core from 10.200.16.10 port 37532 ssh2: RSA SHA256:AyxZlAlMlcR5vugKaGkRn7Fc8dKoxnuuK/wm0HuJN6k
Sep 12 17:47:48.119625 sshd[6786]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:47:48.126570 systemd-logind[1692]: New session 26 of user core.
Sep 12 17:47:48.130612 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 12 17:47:48.535283 sshd[6786]: pam_unix(sshd:session): session closed for user core
Sep 12 17:47:48.541632 systemd[1]: sshd@23-10.200.20.46:22-10.200.16.10:37532.service: Deactivated successfully.
Sep 12 17:47:48.546426 systemd[1]: session-26.scope: Deactivated successfully.
Sep 12 17:47:48.547855 systemd-logind[1692]: Session 26 logged out. Waiting for processes to exit.
Sep 12 17:47:48.549538 systemd-logind[1692]: Removed session 26.