Sep 12 17:21:31.037079 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd490] Sep 12 17:21:31.037096 kernel: Linux version 6.12.47-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Fri Sep 12 15:37:01 -00 2025 Sep 12 17:21:31.037102 kernel: KASLR enabled Sep 12 17:21:31.037107 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Sep 12 17:21:31.037111 kernel: printk: legacy bootconsole [pl11] enabled Sep 12 17:21:31.037115 kernel: efi: EFI v2.7 by EDK II Sep 12 17:21:31.037120 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f20e018 RNG=0x3fd5f998 MEMRESERVE=0x3e471598 Sep 12 17:21:31.037124 kernel: random: crng init done Sep 12 17:21:31.037128 kernel: secureboot: Secure boot disabled Sep 12 17:21:31.037132 kernel: ACPI: Early table checksum verification disabled Sep 12 17:21:31.037136 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL) Sep 12 17:21:31.037140 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 12 17:21:31.037144 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 12 17:21:31.037150 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628) Sep 12 17:21:31.037155 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 12 17:21:31.037159 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 12 17:21:31.037163 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 12 17:21:31.037167 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 12 17:21:31.037172 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 12 17:21:31.037176 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL 
MICROSFT 00000001 MSFT 00000001) Sep 12 17:21:31.037181 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Sep 12 17:21:31.037185 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 12 17:21:31.037189 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Sep 12 17:21:31.037193 kernel: ACPI: Use ACPI SPCR as default console: No Sep 12 17:21:31.037197 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Sep 12 17:21:31.037201 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] hotplug Sep 12 17:21:31.037205 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] hotplug Sep 12 17:21:31.037210 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Sep 12 17:21:31.037214 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Sep 12 17:21:31.037219 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Sep 12 17:21:31.037236 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Sep 12 17:21:31.037240 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Sep 12 17:21:31.037244 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Sep 12 17:21:31.037249 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Sep 12 17:21:31.037253 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Sep 12 17:21:31.037257 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Sep 12 17:21:31.037261 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x1bfffffff] -> [mem 0x00000000-0x1bfffffff] Sep 12 17:21:31.037265 kernel: NODE_DATA(0) allocated [mem 0x1bf7fda00-0x1bf804fff] Sep 12 17:21:31.037269 kernel: Zone ranges: Sep 12 17:21:31.037274 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Sep 12 17:21:31.037281 kernel: DMA32 empty Sep 12 17:21:31.037285 
kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Sep 12 17:21:31.037290 kernel: Device empty Sep 12 17:21:31.037294 kernel: Movable zone start for each node Sep 12 17:21:31.037298 kernel: Early memory node ranges Sep 12 17:21:31.037304 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Sep 12 17:21:31.037308 kernel: node 0: [mem 0x0000000000824000-0x000000003e45ffff] Sep 12 17:21:31.037312 kernel: node 0: [mem 0x000000003e460000-0x000000003e46ffff] Sep 12 17:21:31.037317 kernel: node 0: [mem 0x000000003e470000-0x000000003e54ffff] Sep 12 17:21:31.037321 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff] Sep 12 17:21:31.037325 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] Sep 12 17:21:31.037330 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] Sep 12 17:21:31.037334 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] Sep 12 17:21:31.037338 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Sep 12 17:21:31.037343 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Sep 12 17:21:31.037347 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Sep 12 17:21:31.037351 kernel: cma: Reserved 16 MiB at 0x000000003d400000 on node -1 Sep 12 17:21:31.037357 kernel: psci: probing for conduit method from ACPI. Sep 12 17:21:31.037361 kernel: psci: PSCIv1.1 detected in firmware. Sep 12 17:21:31.037365 kernel: psci: Using standard PSCI v0.2 function IDs Sep 12 17:21:31.037370 kernel: psci: MIGRATE_INFO_TYPE not supported. 
Sep 12 17:21:31.037374 kernel: psci: SMC Calling Convention v1.4 Sep 12 17:21:31.037379 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Sep 12 17:21:31.037383 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Sep 12 17:21:31.037387 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Sep 12 17:21:31.037392 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Sep 12 17:21:31.037396 kernel: pcpu-alloc: [0] 0 [0] 1 Sep 12 17:21:31.037401 kernel: Detected PIPT I-cache on CPU0 Sep 12 17:21:31.037406 kernel: CPU features: detected: Address authentication (architected QARMA5 algorithm) Sep 12 17:21:31.037410 kernel: CPU features: detected: GIC system register CPU interface Sep 12 17:21:31.037415 kernel: CPU features: detected: Spectre-v4 Sep 12 17:21:31.037419 kernel: CPU features: detected: Spectre-BHB Sep 12 17:21:31.037423 kernel: CPU features: kernel page table isolation forced ON by KASLR Sep 12 17:21:31.037428 kernel: CPU features: detected: Kernel page table isolation (KPTI) Sep 12 17:21:31.037432 kernel: CPU features: detected: ARM erratum 2067961 or 2054223 Sep 12 17:21:31.037437 kernel: CPU features: detected: SSBS not fully self-synchronizing Sep 12 17:21:31.037441 kernel: alternatives: applying boot alternatives Sep 12 17:21:31.037446 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=9b01894f6bb04aff3ec9b8554b3ae56a087d51961f1a01981bc4d4f54ccefc09 Sep 12 17:21:31.037451 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Sep 12 17:21:31.037456 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 12 17:21:31.037461 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 12 17:21:31.037465 kernel: Fallback order for Node 0: 0 Sep 12 17:21:31.037470 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048540 Sep 12 17:21:31.037474 kernel: Policy zone: Normal Sep 12 17:21:31.037478 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 12 17:21:31.037483 kernel: software IO TLB: area num 2. Sep 12 17:21:31.037487 kernel: software IO TLB: mapped [mem 0x0000000036290000-0x000000003a290000] (64MB) Sep 12 17:21:31.037491 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Sep 12 17:21:31.037496 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 12 17:21:31.037501 kernel: rcu: RCU event tracing is enabled. Sep 12 17:21:31.037506 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Sep 12 17:21:31.037511 kernel: Trampoline variant of Tasks RCU enabled. Sep 12 17:21:31.037515 kernel: Tracing variant of Tasks RCU enabled. Sep 12 17:21:31.037520 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 12 17:21:31.037524 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Sep 12 17:21:31.037528 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 12 17:21:31.037533 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Sep 12 17:21:31.037537 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Sep 12 17:21:31.037542 kernel: GICv3: 960 SPIs implemented Sep 12 17:21:31.037546 kernel: GICv3: 0 Extended SPIs implemented Sep 12 17:21:31.037550 kernel: Root IRQ handler: gic_handle_irq Sep 12 17:21:31.037555 kernel: GICv3: GICv3 features: 16 PPIs, RSS Sep 12 17:21:31.037560 kernel: GICv3: GICD_CTRL.DS=0, SCR_EL3.FIQ=0 Sep 12 17:21:31.037564 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Sep 12 17:21:31.037569 kernel: ITS: No ITS available, not enabling LPIs Sep 12 17:21:31.037573 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 12 17:21:31.037577 kernel: arch_timer: cp15 timer(s) running at 1000.00MHz (virt). Sep 12 17:21:31.037582 kernel: clocksource: arch_sys_counter: mask: 0x1fffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Sep 12 17:21:31.037586 kernel: sched_clock: 61 bits at 1000MHz, resolution 1ns, wraps every 4398046511103ns Sep 12 17:21:31.037591 kernel: Console: colour dummy device 80x25 Sep 12 17:21:31.037596 kernel: printk: legacy console [tty1] enabled Sep 12 17:21:31.037600 kernel: ACPI: Core revision 20240827 Sep 12 17:21:31.037605 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 2000.00 BogoMIPS (lpj=1000000) Sep 12 17:21:31.037610 kernel: pid_max: default: 32768 minimum: 301 Sep 12 17:21:31.037615 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Sep 12 17:21:31.037619 kernel: landlock: Up and running. Sep 12 17:21:31.037624 kernel: SELinux: Initializing. 
Sep 12 17:21:31.037629 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 12 17:21:31.037636 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 12 17:21:31.037642 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x1a0000e, misc 0x31e1 Sep 12 17:21:31.037647 kernel: Hyper-V: Host Build 10.0.26100.1261-1-0 Sep 12 17:21:31.037651 kernel: Hyper-V: enabling crash_kexec_post_notifiers Sep 12 17:21:31.037656 kernel: rcu: Hierarchical SRCU implementation. Sep 12 17:21:31.037661 kernel: rcu: Max phase no-delay instances is 400. Sep 12 17:21:31.037666 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Sep 12 17:21:31.037671 kernel: Remapping and enabling EFI services. Sep 12 17:21:31.037676 kernel: smp: Bringing up secondary CPUs ... Sep 12 17:21:31.037681 kernel: Detected PIPT I-cache on CPU1 Sep 12 17:21:31.037686 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Sep 12 17:21:31.037691 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd490] Sep 12 17:21:31.037696 kernel: smp: Brought up 1 node, 2 CPUs Sep 12 17:21:31.037701 kernel: SMP: Total of 2 processors activated. 
Sep 12 17:21:31.037705 kernel: CPU: All CPU(s) started at EL1 Sep 12 17:21:31.037710 kernel: CPU features: detected: 32-bit EL0 Support Sep 12 17:21:31.037715 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Sep 12 17:21:31.037720 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Sep 12 17:21:31.037725 kernel: CPU features: detected: Common not Private translations Sep 12 17:21:31.037729 kernel: CPU features: detected: CRC32 instructions Sep 12 17:21:31.037735 kernel: CPU features: detected: Generic authentication (architected QARMA5 algorithm) Sep 12 17:21:31.037740 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Sep 12 17:21:31.037745 kernel: CPU features: detected: LSE atomic instructions Sep 12 17:21:31.037750 kernel: CPU features: detected: Privileged Access Never Sep 12 17:21:31.037754 kernel: CPU features: detected: Speculation barrier (SB) Sep 12 17:21:31.037759 kernel: CPU features: detected: TLB range maintenance instructions Sep 12 17:21:31.037764 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Sep 12 17:21:31.037769 kernel: CPU features: detected: Scalable Vector Extension Sep 12 17:21:31.037773 kernel: alternatives: applying system-wide alternatives Sep 12 17:21:31.037779 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1 Sep 12 17:21:31.037784 kernel: SVE: maximum available vector length 16 bytes per vector Sep 12 17:21:31.037788 kernel: SVE: default vector length 16 bytes per vector Sep 12 17:21:31.037793 kernel: Memory: 3959668K/4194160K available (11136K kernel code, 2440K rwdata, 9068K rodata, 38912K init, 1038K bss, 213304K reserved, 16384K cma-reserved) Sep 12 17:21:31.037798 kernel: devtmpfs: initialized Sep 12 17:21:31.037803 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 12 17:21:31.037808 kernel: futex hash table entries: 512 (order: 3, 32768 
bytes, linear) Sep 12 17:21:31.037813 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Sep 12 17:21:31.037817 kernel: 0 pages in range for non-PLT usage Sep 12 17:21:31.037823 kernel: 508576 pages in range for PLT usage Sep 12 17:21:31.037828 kernel: pinctrl core: initialized pinctrl subsystem Sep 12 17:21:31.037832 kernel: SMBIOS 3.1.0 present. Sep 12 17:21:31.037837 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Sep 12 17:21:31.037842 kernel: DMI: Memory slots populated: 2/2 Sep 12 17:21:31.037847 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 12 17:21:31.037852 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Sep 12 17:21:31.037856 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Sep 12 17:21:31.037861 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Sep 12 17:21:31.037867 kernel: audit: initializing netlink subsys (disabled) Sep 12 17:21:31.037872 kernel: audit: type=2000 audit(0.059:1): state=initialized audit_enabled=0 res=1 Sep 12 17:21:31.037877 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 12 17:21:31.037881 kernel: cpuidle: using governor menu Sep 12 17:21:31.037886 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Sep 12 17:21:31.037891 kernel: ASID allocator initialised with 32768 entries Sep 12 17:21:31.037895 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 12 17:21:31.037900 kernel: Serial: AMBA PL011 UART driver Sep 12 17:21:31.037905 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 12 17:21:31.037911 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Sep 12 17:21:31.037915 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Sep 12 17:21:31.037920 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Sep 12 17:21:31.037925 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 12 17:21:31.037930 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Sep 12 17:21:31.037934 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Sep 12 17:21:31.037939 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Sep 12 17:21:31.037944 kernel: ACPI: Added _OSI(Module Device) Sep 12 17:21:31.037948 kernel: ACPI: Added _OSI(Processor Device) Sep 12 17:21:31.037954 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 12 17:21:31.037959 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 12 17:21:31.037963 kernel: ACPI: Interpreter enabled Sep 12 17:21:31.037968 kernel: ACPI: Using GIC for interrupt routing Sep 12 17:21:31.037973 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Sep 12 17:21:31.037978 kernel: printk: legacy console [ttyAMA0] enabled Sep 12 17:21:31.037982 kernel: printk: legacy bootconsole [pl11] disabled Sep 12 17:21:31.037987 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Sep 12 17:21:31.037992 kernel: ACPI: CPU0 has been hot-added Sep 12 17:21:31.037997 kernel: ACPI: CPU1 has been hot-added Sep 12 17:21:31.038002 kernel: iommu: Default domain type: Translated Sep 12 17:21:31.038007 kernel: iommu: DMA domain TLB invalidation policy: 
strict mode Sep 12 17:21:31.038012 kernel: efivars: Registered efivars operations Sep 12 17:21:31.038016 kernel: vgaarb: loaded Sep 12 17:21:31.038021 kernel: clocksource: Switched to clocksource arch_sys_counter Sep 12 17:21:31.038026 kernel: VFS: Disk quotas dquot_6.6.0 Sep 12 17:21:31.038031 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 12 17:21:31.038035 kernel: pnp: PnP ACPI init Sep 12 17:21:31.038041 kernel: pnp: PnP ACPI: found 0 devices Sep 12 17:21:31.038046 kernel: NET: Registered PF_INET protocol family Sep 12 17:21:31.038051 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 12 17:21:31.038056 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 12 17:21:31.038060 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 12 17:21:31.038065 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 12 17:21:31.038070 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 12 17:21:31.038075 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 12 17:21:31.038079 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 12 17:21:31.038085 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 12 17:21:31.038090 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 12 17:21:31.038095 kernel: PCI: CLS 0 bytes, default 64 Sep 12 17:21:31.038099 kernel: kvm [1]: HYP mode not available Sep 12 17:21:31.038104 kernel: Initialise system trusted keyrings Sep 12 17:21:31.038109 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 12 17:21:31.038113 kernel: Key type asymmetric registered Sep 12 17:21:31.038118 kernel: Asymmetric key parser 'x509' registered Sep 12 17:21:31.038123 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Sep 12 17:21:31.038128 kernel: io scheduler mq-deadline 
registered Sep 12 17:21:31.038133 kernel: io scheduler kyber registered Sep 12 17:21:31.038138 kernel: io scheduler bfq registered Sep 12 17:21:31.038143 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 12 17:21:31.038147 kernel: thunder_xcv, ver 1.0 Sep 12 17:21:31.038152 kernel: thunder_bgx, ver 1.0 Sep 12 17:21:31.038157 kernel: nicpf, ver 1.0 Sep 12 17:21:31.038161 kernel: nicvf, ver 1.0 Sep 12 17:21:31.039331 kernel: rtc-efi rtc-efi.0: registered as rtc0 Sep 12 17:21:31.039400 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-12T17:21:30 UTC (1757697690) Sep 12 17:21:31.039406 kernel: efifb: probing for efifb Sep 12 17:21:31.039411 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Sep 12 17:21:31.039416 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Sep 12 17:21:31.039421 kernel: efifb: scrolling: redraw Sep 12 17:21:31.039426 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Sep 12 17:21:31.039431 kernel: Console: switching to colour frame buffer device 128x48 Sep 12 17:21:31.039436 kernel: fb0: EFI VGA frame buffer device Sep 12 17:21:31.039442 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... 
Sep 12 17:21:31.039447 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 12 17:21:31.039452 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Sep 12 17:21:31.039456 kernel: watchdog: NMI not fully supported Sep 12 17:21:31.039461 kernel: watchdog: Hard watchdog permanently disabled Sep 12 17:21:31.039466 kernel: NET: Registered PF_INET6 protocol family Sep 12 17:21:31.039471 kernel: Segment Routing with IPv6 Sep 12 17:21:31.039475 kernel: In-situ OAM (IOAM) with IPv6 Sep 12 17:21:31.039480 kernel: NET: Registered PF_PACKET protocol family Sep 12 17:21:31.039486 kernel: Key type dns_resolver registered Sep 12 17:21:31.039491 kernel: registered taskstats version 1 Sep 12 17:21:31.039496 kernel: Loading compiled-in X.509 certificates Sep 12 17:21:31.039501 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.47-flatcar: 7675c1947f324bc6524fdc1ee0f8f5f343acfea7' Sep 12 17:21:31.039505 kernel: Demotion targets for Node 0: null Sep 12 17:21:31.039510 kernel: Key type .fscrypt registered Sep 12 17:21:31.039515 kernel: Key type fscrypt-provisioning registered Sep 12 17:21:31.039520 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 12 17:21:31.039524 kernel: ima: Allocated hash algorithm: sha1 Sep 12 17:21:31.039530 kernel: ima: No architecture policies found Sep 12 17:21:31.039535 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Sep 12 17:21:31.039540 kernel: clk: Disabling unused clocks Sep 12 17:21:31.039545 kernel: PM: genpd: Disabling unused power domains Sep 12 17:21:31.039549 kernel: Warning: unable to open an initial console. 
Sep 12 17:21:31.039554 kernel: Freeing unused kernel memory: 38912K Sep 12 17:21:31.039559 kernel: Run /init as init process Sep 12 17:21:31.039564 kernel: with arguments: Sep 12 17:21:31.039569 kernel: /init Sep 12 17:21:31.039574 kernel: with environment: Sep 12 17:21:31.039579 kernel: HOME=/ Sep 12 17:21:31.039584 kernel: TERM=linux Sep 12 17:21:31.039588 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 12 17:21:31.039594 systemd[1]: Successfully made /usr/ read-only. Sep 12 17:21:31.039601 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 12 17:21:31.039607 systemd[1]: Detected virtualization microsoft. Sep 12 17:21:31.039612 systemd[1]: Detected architecture arm64. Sep 12 17:21:31.039617 systemd[1]: Running in initrd. Sep 12 17:21:31.039623 systemd[1]: No hostname configured, using default hostname. Sep 12 17:21:31.039628 systemd[1]: Hostname set to . Sep 12 17:21:31.039633 systemd[1]: Initializing machine ID from random generator. Sep 12 17:21:31.039638 systemd[1]: Queued start job for default target initrd.target. Sep 12 17:21:31.039643 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 17:21:31.039648 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 17:21:31.039654 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 12 17:21:31.039660 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 12 17:21:31.039666 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... 
Sep 12 17:21:31.039671 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 12 17:21:31.039677 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 12 17:21:31.039682 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 12 17:21:31.039688 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 17:21:31.039694 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 17:21:31.039699 systemd[1]: Reached target paths.target - Path Units. Sep 12 17:21:31.039704 systemd[1]: Reached target slices.target - Slice Units. Sep 12 17:21:31.039709 systemd[1]: Reached target swap.target - Swaps. Sep 12 17:21:31.039714 systemd[1]: Reached target timers.target - Timer Units. Sep 12 17:21:31.039720 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 17:21:31.039725 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 17:21:31.039730 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 12 17:21:31.039735 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 12 17:21:31.039741 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 12 17:21:31.039746 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 17:21:31.039752 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 17:21:31.039757 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 17:21:31.039762 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 12 17:21:31.039767 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 17:21:31.039772 systemd[1]: Finished network-cleanup.service - Network Cleanup. 
Sep 12 17:21:31.039778 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 12 17:21:31.039784 systemd[1]: Starting systemd-fsck-usr.service... Sep 12 17:21:31.039789 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 17:21:31.039794 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 17:21:31.039811 systemd-journald[224]: Collecting audit messages is disabled. Sep 12 17:21:31.039824 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:21:31.039830 systemd-journald[224]: Journal started Sep 12 17:21:31.039844 systemd-journald[224]: Runtime Journal (/run/log/journal/6a745f335d1548439e5bb04e4d10e18e) is 8M, max 78.5M, 70.5M free. Sep 12 17:21:31.047906 systemd-modules-load[226]: Inserted module 'overlay' Sep 12 17:21:31.059636 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 17:21:31.069245 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 12 17:21:31.070089 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 12 17:21:31.085792 kernel: Bridge firewalling registered Sep 12 17:21:31.075833 systemd-modules-load[226]: Inserted module 'br_netfilter' Sep 12 17:21:31.077190 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 17:21:31.081837 systemd[1]: Finished systemd-fsck-usr.service. Sep 12 17:21:31.093797 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 17:21:31.104234 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:21:31.111010 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Sep 12 17:21:31.121621 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 17:21:31.137602 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 12 17:21:31.149504 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 17:21:31.164148 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:21:31.170525 systemd-tmpfiles[248]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Sep 12 17:21:31.177239 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 17:21:31.185610 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 17:21:31.195381 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 17:21:31.207292 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 12 17:21:31.223947 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 12 17:21:31.233415 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 17:21:31.254556 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 17:21:31.265687 dracut-cmdline[261]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=9b01894f6bb04aff3ec9b8554b3ae56a087d51961f1a01981bc4d4f54ccefc09 Sep 12 17:21:31.296965 systemd-resolved[262]: Positive Trust Anchors: Sep 12 17:21:31.297739 systemd-resolved[262]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 17:21:31.297763 systemd-resolved[262]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 17:21:31.299660 systemd-resolved[262]: Defaulting to hostname 'linux'. Sep 12 17:21:31.305845 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 17:21:31.310589 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 17:21:31.404247 kernel: SCSI subsystem initialized Sep 12 17:21:31.411237 kernel: Loading iSCSI transport class v2.0-870. Sep 12 17:21:31.417251 kernel: iscsi: registered transport (tcp) Sep 12 17:21:31.430141 kernel: iscsi: registered transport (qla4xxx) Sep 12 17:21:31.430177 kernel: QLogic iSCSI HBA Driver Sep 12 17:21:31.443122 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 12 17:21:31.463672 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 12 17:21:31.470222 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 12 17:21:31.520383 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 12 17:21:31.525916 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Sep 12 17:21:31.592241 kernel: raid6: neonx8 gen() 18550 MB/s Sep 12 17:21:31.612232 kernel: raid6: neonx4 gen() 18552 MB/s Sep 12 17:21:31.631230 kernel: raid6: neonx2 gen() 17081 MB/s Sep 12 17:21:31.650231 kernel: raid6: neonx1 gen() 15083 MB/s Sep 12 17:21:31.670321 kernel: raid6: int64x8 gen() 10543 MB/s Sep 12 17:21:31.689322 kernel: raid6: int64x4 gen() 10618 MB/s Sep 12 17:21:31.708320 kernel: raid6: int64x2 gen() 8997 MB/s Sep 12 17:21:31.730344 kernel: raid6: int64x1 gen() 7040 MB/s Sep 12 17:21:31.730391 kernel: raid6: using algorithm neonx4 gen() 18552 MB/s Sep 12 17:21:31.752150 kernel: raid6: .... xor() 15136 MB/s, rmw enabled Sep 12 17:21:31.752205 kernel: raid6: using neon recovery algorithm Sep 12 17:21:31.759799 kernel: xor: measuring software checksum speed Sep 12 17:21:31.759807 kernel: 8regs : 28609 MB/sec Sep 12 17:21:31.762161 kernel: 32regs : 28806 MB/sec Sep 12 17:21:31.764494 kernel: arm64_neon : 37573 MB/sec Sep 12 17:21:31.767281 kernel: xor: using function: arm64_neon (37573 MB/sec) Sep 12 17:21:31.852312 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 12 17:21:31.858055 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 12 17:21:31.867144 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 17:21:31.894801 systemd-udevd[473]: Using default interface naming scheme 'v255'. Sep 12 17:21:31.899039 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 17:21:31.910669 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 12 17:21:31.933722 dracut-pre-trigger[485]: rd.md=0: removing MD RAID activation Sep 12 17:21:31.953599 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 12 17:21:31.960069 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 17:21:32.009362 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Sep 12 17:21:32.017557 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 12 17:21:32.083986 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 17:21:32.084099 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:21:32.093583 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:21:32.114301 kernel: hv_vmbus: Vmbus version:5.3
Sep 12 17:21:32.099387 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:21:32.113483 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 12 17:21:32.140594 kernel: hv_vmbus: registering driver hyperv_keyboard
Sep 12 17:21:32.140623 kernel: hv_vmbus: registering driver hid_hyperv
Sep 12 17:21:32.140631 kernel: hv_vmbus: registering driver hv_storvsc
Sep 12 17:21:32.140645 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Sep 12 17:21:32.141264 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Sep 12 17:21:32.148485 kernel: scsi host1: storvsc_host_t
Sep 12 17:21:32.149243 kernel: hv_vmbus: registering driver hv_netvsc
Sep 12 17:21:32.155243 kernel: scsi host0: storvsc_host_t
Sep 12 17:21:32.155391 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Sep 12 17:21:32.168996 kernel: pps_core: LinuxPPS API ver. 1 registered
Sep 12 17:21:32.169035 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Sep 12 17:21:32.163383 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:21:32.181977 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Sep 12 17:21:32.186707 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Sep 12 17:21:32.193242 kernel: PTP clock support registered
Sep 12 17:21:32.199879 kernel: hv_utils: Registering HyperV Utility Driver
Sep 12 17:21:32.199905 kernel: hv_vmbus: registering driver hv_utils
Sep 12 17:21:32.207116 kernel: hv_utils: Shutdown IC version 3.2
Sep 12 17:21:32.207148 kernel: hv_utils: Heartbeat IC version 3.0
Sep 12 17:21:32.207156 kernel: hv_utils: TimeSync IC version 4.0
Sep 12 17:21:32.256686 systemd-resolved[262]: Clock change detected. Flushing caches.
Sep 12 17:21:32.276218 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Sep 12 17:21:32.276381 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Sep 12 17:21:32.283724 kernel: sd 0:0:0:0: [sda] Write Protect is off
Sep 12 17:21:32.283843 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Sep 12 17:21:32.283909 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Sep 12 17:21:32.284486 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#257 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Sep 12 17:21:32.296484 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#264 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Sep 12 17:21:32.307634 kernel: hv_netvsc 000d3afb-4c57-000d-3afb-4c57000d3afb eth0: VF slot 1 added
Sep 12 17:21:32.314722 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 12 17:21:32.314757 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Sep 12 17:21:32.321802 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Sep 12 17:21:32.321955 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 12 17:21:32.322483 kernel: hv_vmbus: registering driver hv_pci
Sep 12 17:21:32.325512 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Sep 12 17:21:32.325659 kernel: hv_pci 2f37b115-89e1-4091-8320-9702644ca4c0: PCI VMBus probing: Using version 0x10004
Sep 12 17:21:32.339826 kernel: hv_pci 2f37b115-89e1-4091-8320-9702644ca4c0: PCI host bridge to bus 89e1:00
Sep 12 17:21:32.339969 kernel: pci_bus 89e1:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Sep 12 17:21:32.344561 kernel: pci_bus 89e1:00: No busn resource found for root bus, will use [bus 00-ff]
Sep 12 17:21:32.353845 kernel: pci 89e1:00:02.0: [15b3:101a] type 00 class 0x020000 PCIe Endpoint
Sep 12 17:21:32.359970 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#263 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Sep 12 17:21:32.364492 kernel: pci 89e1:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]
Sep 12 17:21:32.368562 kernel: pci 89e1:00:02.0: enabling Extended Tags
Sep 12 17:21:32.385821 kernel: pci 89e1:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 89e1:00:02.0 (capable of 252.048 Gb/s with 16.0 GT/s PCIe x16 link)
Sep 12 17:21:32.385878 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#150 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Sep 12 17:21:32.396485 kernel: pci_bus 89e1:00: busn_res: [bus 00-ff] end is updated to 00
Sep 12 17:21:32.401828 kernel: pci 89e1:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]: assigned
Sep 12 17:21:32.460909 kernel: mlx5_core 89e1:00:02.0: enabling device (0000 -> 0002)
Sep 12 17:21:32.468571 kernel: mlx5_core 89e1:00:02.0: PTM is not supported by PCIe
Sep 12 17:21:32.468675 kernel: mlx5_core 89e1:00:02.0: firmware version: 16.30.5006
Sep 12 17:21:32.651569 kernel: hv_netvsc 000d3afb-4c57-000d-3afb-4c57000d3afb eth0: VF registering: eth1
Sep 12 17:21:32.651766 kernel: mlx5_core 89e1:00:02.0 eth1: joined to eth0
Sep 12 17:21:32.656427 kernel: mlx5_core 89e1:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Sep 12 17:21:32.666487 kernel: mlx5_core 89e1:00:02.0 enP35297s1: renamed from eth1
Sep 12 17:21:32.925392 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Sep 12 17:21:32.950543 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Sep 12 17:21:32.983973 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Sep 12 17:21:32.999657 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Sep 12 17:21:33.004608 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Sep 12 17:21:33.015318 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 12 17:21:33.024333 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 17:21:33.032286 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 17:21:33.040846 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 17:21:33.053612 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 12 17:21:33.068587 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 12 17:21:33.088208 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#260 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Sep 12 17:21:33.090518 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 17:21:33.103575 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 12 17:21:34.114533 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#140 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Sep 12 17:21:34.127501 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 12 17:21:34.128361 disk-uuid[654]: The operation has completed successfully.
Sep 12 17:21:34.190011 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 12 17:21:34.192149 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 12 17:21:34.221905 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 12 17:21:34.239514 sh[818]: Success
Sep 12 17:21:34.269979 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 12 17:21:34.270020 kernel: device-mapper: uevent: version 1.0.3
Sep 12 17:21:34.275510 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 12 17:21:34.282480 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Sep 12 17:21:34.580906 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 12 17:21:34.588247 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 12 17:21:34.603233 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 12 17:21:34.623262 kernel: BTRFS: device fsid 752cb955-bdfa-486a-ad02-b54d5e61d194 devid 1 transid 39 /dev/mapper/usr (254:0) scanned by mount (836)
Sep 12 17:21:34.623296 kernel: BTRFS info (device dm-0): first mount of filesystem 752cb955-bdfa-486a-ad02-b54d5e61d194
Sep 12 17:21:34.627393 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 12 17:21:34.984192 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 12 17:21:34.984271 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 12 17:21:35.012795 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 12 17:21:35.016255 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 12 17:21:35.023569 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 12 17:21:35.024250 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 12 17:21:35.044006 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 12 17:21:35.074484 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (873)
Sep 12 17:21:35.084939 kernel: BTRFS info (device sda6): first mount of filesystem 5f4a7913-42f7-487c-8331-8ab180fe9df7
Sep 12 17:21:35.084980 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Sep 12 17:21:35.135156 kernel: BTRFS info (device sda6): turning on async discard
Sep 12 17:21:35.135206 kernel: BTRFS info (device sda6): enabling free space tree
Sep 12 17:21:35.142274 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 17:21:35.147224 kernel: BTRFS info (device sda6): last unmount of filesystem 5f4a7913-42f7-487c-8331-8ab180fe9df7
Sep 12 17:21:35.152636 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 12 17:21:35.157755 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 12 17:21:35.174431 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 17:21:35.198540 systemd-networkd[1005]: lo: Link UP
Sep 12 17:21:35.198549 systemd-networkd[1005]: lo: Gained carrier
Sep 12 17:21:35.199217 systemd-networkd[1005]: Enumeration completed
Sep 12 17:21:35.201206 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 12 17:21:35.204107 systemd-networkd[1005]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:21:35.204110 systemd-networkd[1005]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 12 17:21:35.208224 systemd[1]: Reached target network.target - Network.
Sep 12 17:21:35.275476 kernel: mlx5_core 89e1:00:02.0 enP35297s1: Link up
Sep 12 17:21:35.275683 kernel: buffer_size[0]=0 is not enough for lossless buffer
Sep 12 17:21:35.310490 kernel: hv_netvsc 000d3afb-4c57-000d-3afb-4c57000d3afb eth0: Data path switched to VF: enP35297s1
Sep 12 17:21:35.311137 systemd-networkd[1005]: enP35297s1: Link UP
Sep 12 17:21:35.311220 systemd-networkd[1005]: eth0: Link UP
Sep 12 17:21:35.311311 systemd-networkd[1005]: eth0: Gained carrier
Sep 12 17:21:35.311323 systemd-networkd[1005]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:21:35.331829 systemd-networkd[1005]: enP35297s1: Gained carrier
Sep 12 17:21:35.343590 systemd-networkd[1005]: eth0: DHCPv4 address 10.200.20.11/24, gateway 10.200.20.1 acquired from 168.63.129.16
Sep 12 17:21:36.507719 ignition[1004]: Ignition 2.21.0
Sep 12 17:21:36.507731 ignition[1004]: Stage: fetch-offline
Sep 12 17:21:36.511816 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 17:21:36.507800 ignition[1004]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:21:36.518957 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 12 17:21:36.507806 ignition[1004]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 12 17:21:36.507880 ignition[1004]: parsed url from cmdline: ""
Sep 12 17:21:36.507882 ignition[1004]: no config URL provided
Sep 12 17:21:36.507885 ignition[1004]: reading system config file "/usr/lib/ignition/user.ign"
Sep 12 17:21:36.507890 ignition[1004]: no config at "/usr/lib/ignition/user.ign"
Sep 12 17:21:36.507894 ignition[1004]: failed to fetch config: resource requires networking
Sep 12 17:21:36.508014 ignition[1004]: Ignition finished successfully
Sep 12 17:21:36.556233 ignition[1017]: Ignition 2.21.0
Sep 12 17:21:36.556247 ignition[1017]: Stage: fetch
Sep 12 17:21:36.556387 ignition[1017]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:21:36.556394 ignition[1017]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 12 17:21:36.556458 ignition[1017]: parsed url from cmdline: ""
Sep 12 17:21:36.556461 ignition[1017]: no config URL provided
Sep 12 17:21:36.556487 ignition[1017]: reading system config file "/usr/lib/ignition/user.ign"
Sep 12 17:21:36.556492 ignition[1017]: no config at "/usr/lib/ignition/user.ign"
Sep 12 17:21:36.556519 ignition[1017]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Sep 12 17:21:36.620424 ignition[1017]: GET result: OK
Sep 12 17:21:36.620490 ignition[1017]: config has been read from IMDS userdata
Sep 12 17:21:36.620514 ignition[1017]: parsing config with SHA512: 4cb60c53b8f1d30f0c4bf633a7703a4172288d4175026fa7c83b86bac966e1f429e326f4201acade90c05815d2e34da2998eb7f85eda0b8fe02ca4629b403e5c
Sep 12 17:21:36.627575 unknown[1017]: fetched base config from "system"
Sep 12 17:21:36.627581 unknown[1017]: fetched base config from "system"
Sep 12 17:21:36.627793 ignition[1017]: fetch: fetch complete
Sep 12 17:21:36.627585 unknown[1017]: fetched user config from "azure"
Sep 12 17:21:36.627796 ignition[1017]: fetch: fetch passed
Sep 12 17:21:36.631770 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 12 17:21:36.627825 ignition[1017]: Ignition finished successfully
Sep 12 17:21:36.637439 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 12 17:21:36.668954 ignition[1023]: Ignition 2.21.0
Sep 12 17:21:36.671156 ignition[1023]: Stage: kargs
Sep 12 17:21:36.671439 ignition[1023]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:21:36.675034 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 12 17:21:36.671448 ignition[1023]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 12 17:21:36.682318 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 12 17:21:36.672260 ignition[1023]: kargs: kargs passed
Sep 12 17:21:36.672302 ignition[1023]: Ignition finished successfully
Sep 12 17:21:36.709832 ignition[1029]: Ignition 2.21.0
Sep 12 17:21:36.712034 ignition[1029]: Stage: disks
Sep 12 17:21:36.712487 ignition[1029]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:21:36.715709 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 12 17:21:36.712509 ignition[1029]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 12 17:21:36.722562 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 12 17:21:36.713194 ignition[1029]: disks: disks passed
Sep 12 17:21:36.730152 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 12 17:21:36.713232 ignition[1029]: Ignition finished successfully
Sep 12 17:21:36.738766 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 17:21:36.746514 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 12 17:21:36.752541 systemd[1]: Reached target basic.target - Basic System.
Sep 12 17:21:36.760991 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 12 17:21:36.775655 systemd-networkd[1005]: eth0: Gained IPv6LL
Sep 12 17:21:36.846111 systemd-fsck[1037]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks
Sep 12 17:21:36.854446 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 12 17:21:36.860194 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 12 17:21:38.930484 kernel: EXT4-fs (sda9): mounted filesystem c902100c-52b7-422c-84ac-d834d4db2717 r/w with ordered data mode. Quota mode: none.
Sep 12 17:21:38.934770 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 12 17:21:38.938036 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 12 17:21:38.999951 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 17:21:39.016046 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 12 17:21:39.021562 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Sep 12 17:21:39.031844 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 12 17:21:39.031925 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 17:21:39.037323 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 12 17:21:39.051944 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 12 17:21:39.073487 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1051)
Sep 12 17:21:39.083365 kernel: BTRFS info (device sda6): first mount of filesystem 5f4a7913-42f7-487c-8331-8ab180fe9df7
Sep 12 17:21:39.083391 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Sep 12 17:21:39.092115 kernel: BTRFS info (device sda6): turning on async discard
Sep 12 17:21:39.092143 kernel: BTRFS info (device sda6): enabling free space tree
Sep 12 17:21:39.093307 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 17:21:39.578432 coreos-metadata[1053]: Sep 12 17:21:39.578 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Sep 12 17:21:39.584290 coreos-metadata[1053]: Sep 12 17:21:39.583 INFO Fetch successful
Sep 12 17:21:39.584290 coreos-metadata[1053]: Sep 12 17:21:39.584 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Sep 12 17:21:39.595854 coreos-metadata[1053]: Sep 12 17:21:39.595 INFO Fetch successful
Sep 12 17:21:39.612148 coreos-metadata[1053]: Sep 12 17:21:39.611 INFO wrote hostname ci-4426.1.0-a-9410d45923 to /sysroot/etc/hostname
Sep 12 17:21:39.617921 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 12 17:21:39.856201 initrd-setup-root[1081]: cut: /sysroot/etc/passwd: No such file or directory
Sep 12 17:21:39.900725 initrd-setup-root[1088]: cut: /sysroot/etc/group: No such file or directory
Sep 12 17:21:39.918419 initrd-setup-root[1095]: cut: /sysroot/etc/shadow: No such file or directory
Sep 12 17:21:39.923410 initrd-setup-root[1102]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 12 17:21:41.094793 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 12 17:21:41.100314 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 12 17:21:41.115974 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 12 17:21:41.124052 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 12 17:21:41.133495 kernel: BTRFS info (device sda6): last unmount of filesystem 5f4a7913-42f7-487c-8331-8ab180fe9df7
Sep 12 17:21:41.150455 ignition[1174]: INFO : Ignition 2.21.0
Sep 12 17:21:41.150455 ignition[1174]: INFO : Stage: mount
Sep 12 17:21:41.150455 ignition[1174]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 17:21:41.150455 ignition[1174]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 12 17:21:41.154044 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 12 17:21:41.180073 ignition[1174]: INFO : mount: mount passed
Sep 12 17:21:41.180073 ignition[1174]: INFO : Ignition finished successfully
Sep 12 17:21:41.161404 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 12 17:21:41.170086 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 12 17:21:41.191655 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 17:21:41.229729 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1185)
Sep 12 17:21:41.229777 kernel: BTRFS info (device sda6): first mount of filesystem 5f4a7913-42f7-487c-8331-8ab180fe9df7
Sep 12 17:21:41.234043 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Sep 12 17:21:41.242830 kernel: BTRFS info (device sda6): turning on async discard
Sep 12 17:21:41.242857 kernel: BTRFS info (device sda6): enabling free space tree
Sep 12 17:21:41.244354 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 17:21:41.268568 ignition[1203]: INFO : Ignition 2.21.0
Sep 12 17:21:41.268568 ignition[1203]: INFO : Stage: files
Sep 12 17:21:41.274506 ignition[1203]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 17:21:41.274506 ignition[1203]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 12 17:21:41.274506 ignition[1203]: DEBUG : files: compiled without relabeling support, skipping
Sep 12 17:21:41.300963 ignition[1203]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 12 17:21:41.306305 ignition[1203]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 12 17:21:41.365061 ignition[1203]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 12 17:21:41.370305 ignition[1203]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 12 17:21:41.370305 ignition[1203]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 12 17:21:41.365393 unknown[1203]: wrote ssh authorized keys file for user: core
Sep 12 17:21:41.406743 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Sep 12 17:21:41.414262 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Sep 12 17:21:41.442629 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 12 17:21:41.770769 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Sep 12 17:21:41.770769 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 12 17:21:41.784253 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 12 17:21:41.784253 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 17:21:41.784253 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 17:21:41.784253 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 17:21:41.784253 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 17:21:41.784253 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 17:21:41.784253 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 17:21:41.829574 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 17:21:41.829574 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 17:21:41.829574 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 12 17:21:41.829574 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 12 17:21:41.829574 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 12 17:21:41.829574 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Sep 12 17:21:42.356930 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 12 17:21:42.583895 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 12 17:21:42.583895 ignition[1203]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 12 17:21:42.640404 ignition[1203]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 17:21:42.653685 ignition[1203]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 17:21:42.661730 ignition[1203]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 12 17:21:42.661730 ignition[1203]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Sep 12 17:21:42.661730 ignition[1203]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Sep 12 17:21:42.661730 ignition[1203]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 17:21:42.661730 ignition[1203]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 17:21:42.661730 ignition[1203]: INFO : files: files passed
Sep 12 17:21:42.661730 ignition[1203]: INFO : Ignition finished successfully
Sep 12 17:21:42.662084 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 12 17:21:42.671357 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 12 17:21:42.701159 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 12 17:21:42.714123 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 12 17:21:42.737576 initrd-setup-root-after-ignition[1235]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:21:42.714194 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 12 17:21:42.758190 initrd-setup-root-after-ignition[1231]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:21:42.758190 initrd-setup-root-after-ignition[1231]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:21:42.729366 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 17:21:42.734647 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 12 17:21:42.743456 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 12 17:21:42.799804 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 12 17:21:42.800638 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 12 17:21:42.809723 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 12 17:21:42.817887 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 12 17:21:42.825528 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 12 17:21:42.826194 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 12 17:21:42.860510 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 17:21:42.866167 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 12 17:21:42.891325 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 12 17:21:42.895916 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 17:21:42.904280 systemd[1]: Stopped target timers.target - Timer Units.
Sep 12 17:21:42.911707 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 12 17:21:42.911803 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 17:21:42.922587 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 12 17:21:42.926649 systemd[1]: Stopped target basic.target - Basic System.
Sep 12 17:21:42.934420 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 12 17:21:42.941877 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 17:21:42.950105 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 12 17:21:42.958564 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 12 17:21:42.967346 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 12 17:21:42.974918 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 17:21:42.983661 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 12 17:21:42.991046 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 12 17:21:42.999375 systemd[1]: Stopped target swap.target - Swaps.
Sep 12 17:21:43.005919 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 12 17:21:43.006032 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 17:21:43.016407 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 12 17:21:43.020645 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 17:21:43.029023 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 12 17:21:43.036880 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 17:21:43.042256 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 12 17:21:43.042343 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 12 17:21:43.054140 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 12 17:21:43.054225 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 17:21:43.065038 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 12 17:21:43.065106 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 12 17:21:43.072430 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Sep 12 17:21:43.072507 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 12 17:21:43.081446 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 12 17:21:43.095624 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 12 17:21:43.095732 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 17:21:43.146291 ignition[1256]: INFO : Ignition 2.21.0
Sep 12 17:21:43.146291 ignition[1256]: INFO : Stage: umount
Sep 12 17:21:43.146291 ignition[1256]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 17:21:43.146291 ignition[1256]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 12 17:21:43.146291 ignition[1256]: INFO : umount: umount passed
Sep 12 17:21:43.146291 ignition[1256]: INFO : Ignition finished successfully
Sep 12 17:21:43.111142 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 12 17:21:43.118652 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 12 17:21:43.118794 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 17:21:43.131870 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 12 17:21:43.131951 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 17:21:43.147005 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 12 17:21:43.147074 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 12 17:21:43.154907 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 12 17:21:43.156691 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 12 17:21:43.165064 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 12 17:21:43.165116 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 12 17:21:43.174592 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 12 17:21:43.174628 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 12 17:21:43.180979 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 12 17:21:43.181007 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 12 17:21:43.188553 systemd[1]: Stopped target network.target - Network.
Sep 12 17:21:43.195336 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 12 17:21:43.195373 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 17:21:43.205584 systemd[1]: Stopped target paths.target - Path Units.
Sep 12 17:21:43.211241 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 12 17:21:43.214836 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 17:21:43.219855 systemd[1]: Stopped target slices.target - Slice Units.
Sep 12 17:21:43.227403 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 12 17:21:43.234836 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 12 17:21:43.234875 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 17:21:43.242019 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 12 17:21:43.242041 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 17:21:43.249794 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 12 17:21:43.249838 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 12 17:21:43.257768 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 12 17:21:43.257797 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 12 17:21:43.265711 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 12 17:21:43.272726 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 12 17:21:43.281389 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 12 17:21:43.282017 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 12 17:21:43.282088 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 12 17:21:43.294831 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 12 17:21:43.294920 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 12 17:21:43.306367 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 12 17:21:43.309828 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 12 17:21:43.309912 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 12 17:21:43.472679 kernel: hv_netvsc 000d3afb-4c57-000d-3afb-4c57000d3afb eth0: Data path switched from VF: enP35297s1
Sep 12 17:21:43.318619 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 12 17:21:43.319923 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 12 17:21:43.326268 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 12 17:21:43.326302 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 17:21:43.334390 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 12 17:21:43.334440 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 12 17:21:43.344575 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 12 17:21:43.353055 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 12 17:21:43.353106 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 17:21:43.361060 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 12 17:21:43.361106 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 12 17:21:43.371026 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 12 17:21:43.371061 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 12 17:21:43.376428 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 12 17:21:43.376494 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 17:21:43.386515 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 17:21:43.394725 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 12 17:21:43.394780 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 12 17:21:43.409272 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 12 17:21:43.414572 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 17:21:43.422693 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 12 17:21:43.422727 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 12 17:21:43.429730 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 12 17:21:43.429754 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 17:21:43.437260 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 12 17:21:43.437302 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 17:21:43.447568 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 12 17:21:43.447610 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 12 17:21:43.459575 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 12 17:21:43.459607 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:21:43.486625 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 12 17:21:43.499215 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 12 17:21:43.499282 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 12 17:21:43.511313 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 12 17:21:43.511355 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 17:21:43.517061 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 17:21:43.517100 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:21:43.522374 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Sep 12 17:21:43.522414 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 12 17:21:43.522439 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 12 17:21:43.527200 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 12 17:21:43.527277 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 12 17:21:43.562590 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 12 17:21:43.698482 systemd-journald[224]: Received SIGTERM from PID 1 (systemd).
Sep 12 17:21:43.562721 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 12 17:21:43.571722 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 12 17:21:43.580672 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 12 17:21:43.599336 systemd[1]: Switching root.
Sep 12 17:21:43.711666 systemd-journald[224]: Journal stopped
Sep 12 17:21:51.480403 kernel: SELinux: policy capability network_peer_controls=1
Sep 12 17:21:51.480422 kernel: SELinux: policy capability open_perms=1
Sep 12 17:21:51.480430 kernel: SELinux: policy capability extended_socket_class=1
Sep 12 17:21:51.480436 kernel: SELinux: policy capability always_check_network=0
Sep 12 17:21:51.480443 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 12 17:21:51.480448 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 12 17:21:51.480454 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 12 17:21:51.480459 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 12 17:21:51.480618 kernel: SELinux: policy capability userspace_initial_context=0
Sep 12 17:21:51.480627 kernel: audit: type=1403 audit(1757697705.146:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 12 17:21:51.480637 systemd[1]: Successfully loaded SELinux policy in 232.245ms.
Sep 12 17:21:51.480688 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.293ms.
Sep 12 17:21:51.480696 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 12 17:21:51.480702 systemd[1]: Detected virtualization microsoft.
Sep 12 17:21:51.480708 systemd[1]: Detected architecture arm64.
Sep 12 17:21:51.480715 systemd[1]: Detected first boot.
Sep 12 17:21:51.480722 systemd[1]: Hostname set to .
Sep 12 17:21:51.480727 systemd[1]: Initializing machine ID from random generator.
Sep 12 17:21:51.480733 zram_generator::config[1299]: No configuration found.
Sep 12 17:21:51.480740 kernel: NET: Registered PF_VSOCK protocol family
Sep 12 17:21:51.480746 systemd[1]: Populated /etc with preset unit settings.
Sep 12 17:21:51.480753 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 12 17:21:51.480759 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 12 17:21:51.480765 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 12 17:21:51.480771 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 12 17:21:51.480777 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 12 17:21:51.480783 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 12 17:21:51.480789 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 12 17:21:51.480795 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 12 17:21:51.480802 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 12 17:21:51.480808 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 12 17:21:51.480815 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 12 17:21:51.480821 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 12 17:21:51.480827 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 17:21:51.480832 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 17:21:51.480838 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 12 17:21:51.480844 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 12 17:21:51.480850 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 12 17:21:51.480857 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 17:21:51.480863 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Sep 12 17:21:51.480871 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 17:21:51.480877 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 17:21:51.480883 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 12 17:21:51.480889 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 12 17:21:51.480895 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 12 17:21:51.480902 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 12 17:21:51.480908 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 17:21:51.480914 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 17:21:51.480920 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 17:21:51.480926 systemd[1]: Reached target swap.target - Swaps.
Sep 12 17:21:51.480932 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 12 17:21:51.480938 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 12 17:21:51.480947 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 12 17:21:51.480953 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 17:21:51.480959 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 17:21:51.480965 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 17:21:51.480971 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 12 17:21:51.480978 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 12 17:21:51.480985 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 12 17:21:51.480991 systemd[1]: Mounting media.mount - External Media Directory...
Sep 12 17:21:51.480997 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 12 17:21:51.481003 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 12 17:21:51.481009 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 12 17:21:51.481016 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 12 17:21:51.481022 systemd[1]: Reached target machines.target - Containers.
Sep 12 17:21:51.481028 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 12 17:21:51.481035 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 17:21:51.481041 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 17:21:51.481047 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 12 17:21:51.481053 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 17:21:51.481059 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 12 17:21:51.481066 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 17:21:51.481072 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 12 17:21:51.481079 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 17:21:51.481085 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 12 17:21:51.481092 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 12 17:21:51.481099 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 12 17:21:51.481105 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 12 17:21:51.481111 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 12 17:21:51.481117 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 12 17:21:51.481124 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 17:21:51.481129 kernel: fuse: init (API version 7.41)
Sep 12 17:21:51.481135 kernel: loop: module loaded
Sep 12 17:21:51.481141 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 17:21:51.481148 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 12 17:21:51.481153 kernel: ACPI: bus type drm_connector registered
Sep 12 17:21:51.481159 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 12 17:21:51.481166 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 12 17:21:51.481196 systemd-journald[1400]: Collecting audit messages is disabled.
Sep 12 17:21:51.481213 systemd-journald[1400]: Journal started
Sep 12 17:21:51.481228 systemd-journald[1400]: Runtime Journal (/run/log/journal/bb5f0a4ed33846ba82de27336ed3b5ed) is 8M, max 78.5M, 70.5M free.
Sep 12 17:21:50.766378 systemd[1]: Queued start job for default target multi-user.target.
Sep 12 17:21:50.777934 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Sep 12 17:21:50.778307 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 12 17:21:50.778586 systemd[1]: systemd-journald.service: Consumed 2.176s CPU time.
Sep 12 17:21:51.501941 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 17:21:51.501983 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 12 17:21:51.512352 systemd[1]: Stopped verity-setup.service.
Sep 12 17:21:51.523039 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 17:21:51.523653 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 12 17:21:51.527717 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 12 17:21:51.532193 systemd[1]: Mounted media.mount - External Media Directory.
Sep 12 17:21:51.535850 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 12 17:21:51.540279 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 12 17:21:51.544538 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 12 17:21:51.548304 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 12 17:21:51.553166 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 17:21:51.558254 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 12 17:21:51.558384 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 12 17:21:51.563126 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 17:21:51.563248 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 17:21:51.567714 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 12 17:21:51.567830 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 12 17:21:51.572271 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 17:21:51.572377 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 17:21:51.577298 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 12 17:21:51.577414 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 12 17:21:51.581748 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 17:21:51.581863 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 17:21:51.586369 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 17:21:51.590945 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 12 17:21:51.595999 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 12 17:21:51.601458 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 12 17:21:51.606684 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 17:21:51.620282 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 12 17:21:51.625552 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 12 17:21:51.634427 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 12 17:21:51.638721 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 12 17:21:51.638747 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 17:21:51.643290 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 12 17:21:51.649779 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 12 17:21:51.653705 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 17:21:51.661053 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 12 17:21:51.666312 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 12 17:21:51.670598 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 12 17:21:51.673442 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 12 17:21:51.677473 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 12 17:21:51.678153 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 17:21:51.696125 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 12 17:21:51.702536 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 12 17:21:51.710392 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 12 17:21:51.716001 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 12 17:21:51.720836 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 12 17:21:51.727025 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 12 17:21:51.732660 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 12 17:21:51.741725 systemd-journald[1400]: Time spent on flushing to /var/log/journal/bb5f0a4ed33846ba82de27336ed3b5ed is 8.428ms for 939 entries.
Sep 12 17:21:51.741725 systemd-journald[1400]: System Journal (/var/log/journal/bb5f0a4ed33846ba82de27336ed3b5ed) is 8M, max 2.6G, 2.6G free.
Sep 12 17:21:51.764593 systemd-journald[1400]: Received client request to flush runtime journal.
Sep 12 17:21:51.765650 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 12 17:21:51.796298 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 12 17:21:51.796852 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 12 17:21:51.807855 kernel: loop0: detected capacity change from 0 to 100608
Sep 12 17:21:51.808505 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 17:21:52.342920 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 12 17:21:52.353606 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 17:21:52.357724 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 12 17:21:52.423488 kernel: loop1: detected capacity change from 0 to 29264
Sep 12 17:21:52.497160 systemd-tmpfiles[1454]: ACLs are not supported, ignoring.
Sep 12 17:21:52.497173 systemd-tmpfiles[1454]: ACLs are not supported, ignoring.
Sep 12 17:21:52.499945 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 17:21:53.003693 kernel: loop2: detected capacity change from 0 to 207008
Sep 12 17:21:53.073492 kernel: loop3: detected capacity change from 0 to 119320
Sep 12 17:21:53.368404 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 12 17:21:53.374914 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 17:21:53.400033 systemd-udevd[1460]: Using default interface naming scheme 'v255'.
Sep 12 17:21:53.545488 kernel: loop4: detected capacity change from 0 to 100608
Sep 12 17:21:53.557493 kernel: loop5: detected capacity change from 0 to 29264
Sep 12 17:21:53.569482 kernel: loop6: detected capacity change from 0 to 207008
Sep 12 17:21:53.585491 kernel: loop7: detected capacity change from 0 to 119320
Sep 12 17:21:53.593641 (sd-merge)[1462]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Sep 12 17:21:53.594029 (sd-merge)[1462]: Merged extensions into '/usr'.
Sep 12 17:21:53.597341 systemd[1]: Reload requested from client PID 1439 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 12 17:21:53.597562 systemd[1]: Reloading...
Sep 12 17:21:53.643494 zram_generator::config[1487]: No configuration found.
Sep 12 17:21:53.790925 systemd[1]: Reloading finished in 192 ms.
Sep 12 17:21:53.813156 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 12 17:21:53.823406 systemd[1]: Starting ensure-sysext.service...
Sep 12 17:21:53.828596 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 12 17:21:53.857820 systemd[1]: Reload requested from client PID 1543 ('systemctl') (unit ensure-sysext.service)...
Sep 12 17:21:53.857940 systemd[1]: Reloading...
Sep 12 17:21:53.881687 systemd-tmpfiles[1544]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 12 17:21:53.881715 systemd-tmpfiles[1544]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 12 17:21:53.882670 systemd-tmpfiles[1544]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 12 17:21:53.882817 systemd-tmpfiles[1544]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 12 17:21:53.883244 systemd-tmpfiles[1544]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 12 17:21:53.883387 systemd-tmpfiles[1544]: ACLs are not supported, ignoring.
Sep 12 17:21:53.883416 systemd-tmpfiles[1544]: ACLs are not supported, ignoring.
Sep 12 17:21:53.906515 zram_generator::config[1571]: No configuration found.
Sep 12 17:21:53.939408 systemd-tmpfiles[1544]: Detected autofs mount point /boot during canonicalization of boot.
Sep 12 17:21:53.939421 systemd-tmpfiles[1544]: Skipping /boot
Sep 12 17:21:53.945444 systemd-tmpfiles[1544]: Detected autofs mount point /boot during canonicalization of boot.
Sep 12 17:21:53.945455 systemd-tmpfiles[1544]: Skipping /boot
Sep 12 17:21:54.041291 systemd[1]: Reloading finished in 183 ms.
Sep 12 17:21:54.053050 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 17:21:54.070605 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 12 17:21:54.133095 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 12 17:21:54.144258 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 12 17:21:54.155475 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 12 17:21:54.175813 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 12 17:21:54.183614 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 17:21:54.203002 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 17:21:54.206704 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 17:21:54.212725 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 17:21:54.220628 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 17:21:54.226718 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 17:21:54.226960 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 12 17:21:54.228756 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 17:21:54.238641 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 12 17:21:54.251447 systemd[1]: Expecting device dev-ptp_hyperv.device - /dev/ptp_hyperv...
Sep 12 17:21:54.260227 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 17:21:54.263310 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 12 17:21:54.269866 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 17:21:54.270027 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 12 17:21:54.270134 systemd[1]: Reached target time-set.target - System Time Set.
Sep 12 17:21:54.278555 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 12 17:21:54.286253 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 17:21:54.286779 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 17:21:54.293097 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 17:21:54.294027 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 17:21:54.299969 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 17:21:54.300513 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 17:21:54.307115 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 12 17:21:54.307284 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 12 17:21:54.321299 systemd[1]: Finished ensure-sysext.service.
Sep 12 17:21:54.330407 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Sep 12 17:21:54.330666 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 12 17:21:54.330717 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 12 17:21:54.376654 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#167 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Sep 12 17:21:54.391406 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 12 17:21:54.401231 augenrules[1712]: No rules
Sep 12 17:21:54.403664 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 12 17:21:54.404117 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 12 17:21:54.452349 kernel: hv_vmbus: registering driver hv_balloon
Sep 12 17:21:54.452429 kernel: mousedev: PS/2 mouse device common for all mice
Sep 12 17:21:54.452442 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Sep 12 17:21:54.452454 kernel: hv_balloon: Memory hot add disabled on ARM64
Sep 12 17:21:54.453134 systemd[1]: Condition check resulted in dev-ptp_hyperv.device - /dev/ptp_hyperv being skipped.
Sep 12 17:21:54.493107 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:21:54.503302 kernel: hv_vmbus: registering driver hyperv_fb
Sep 12 17:21:54.503375 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Sep 12 17:21:54.503012 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 12 17:21:54.507493 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Sep 12 17:21:54.515857 kernel: Console: switching to colour dummy device 80x25
Sep 12 17:21:54.525373 kernel: Console: switching to colour frame buffer device 128x48
Sep 12 17:21:54.540511 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 17:21:54.542523 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:21:54.552061 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:21:54.567148 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 17:21:54.567296 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:21:54.574748 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:21:54.684370 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Sep 12 17:21:54.691568 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 12 17:21:54.703056 systemd-resolved[1665]: Positive Trust Anchors:
Sep 12 17:21:54.703429 systemd-resolved[1665]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 12 17:21:54.703531 systemd-resolved[1665]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 12 17:21:54.749650 systemd-networkd[1679]: lo: Link UP
Sep 12 17:21:54.749656 systemd-networkd[1679]: lo: Gained carrier
Sep 12 17:21:54.750666 systemd-networkd[1679]: Enumeration completed
Sep 12 17:21:54.750770 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 12 17:21:54.752767 systemd-networkd[1679]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:21:54.752775 systemd-networkd[1679]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 12 17:21:54.756474 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 12 17:21:54.762202 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 12 17:21:54.775177 systemd-resolved[1665]: Using system hostname 'ci-4426.1.0-a-9410d45923'.
Sep 12 17:21:54.779494 kernel: MACsec IEEE 802.1AE
Sep 12 17:21:54.780542 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 12 17:21:54.819478 kernel: mlx5_core 89e1:00:02.0 enP35297s1: Link up
Sep 12 17:21:54.823495 kernel: buffer_size[0]=0 is not enough for lossless buffer
Sep 12 17:21:54.847630 kernel: hv_netvsc 000d3afb-4c57-000d-3afb-4c57000d3afb eth0: Data path switched to VF: enP35297s1
Sep 12 17:21:54.848843 systemd-networkd[1679]: enP35297s1: Link UP
Sep 12 17:21:54.849038 systemd-networkd[1679]: eth0: Link UP
Sep 12 17:21:54.849040 systemd-networkd[1679]: eth0: Gained carrier
Sep 12 17:21:54.849061 systemd-networkd[1679]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:21:54.849692 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 12 17:21:54.854427 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 12 17:21:54.860620 systemd[1]: Reached target network.target - Network.
Sep 12 17:21:54.864830 systemd-networkd[1679]: enP35297s1: Gained carrier
Sep 12 17:21:54.865738 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 12 17:21:54.881530 systemd-networkd[1679]: eth0: DHCPv4 address 10.200.20.11/24, gateway 10.200.20.1 acquired from 168.63.129.16
Sep 12 17:21:55.905508 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:21:56.039613 systemd-networkd[1679]: eth0: Gained IPv6LL
Sep 12 17:21:56.041562 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 12 17:21:56.046836 systemd[1]: Reached target network-online.target - Network is Online.
Sep 12 17:21:56.772630 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 12 17:21:56.777832 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 12 17:22:00.368887 ldconfig[1433]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 12 17:22:00.386765 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 12 17:22:00.394081 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 12 17:22:00.437303 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 12 17:22:00.442043 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 12 17:22:00.446599 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 12 17:22:00.451759 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 12 17:22:00.456745 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 12 17:22:00.461001 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 12 17:22:00.466214 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 12 17:22:00.471227 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 12 17:22:00.471252 systemd[1]: Reached target paths.target - Path Units.
Sep 12 17:22:00.474773 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 17:22:00.510385 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 12 17:22:00.515724 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 12 17:22:00.520914 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 12 17:22:00.526069 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 12 17:22:00.530983 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 12 17:22:00.536602 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 12 17:22:00.564008 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 12 17:22:00.569117 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 12 17:22:00.574155 systemd[1]: Reached target sockets.target - Socket Units.
Sep 12 17:22:00.578624 systemd[1]: Reached target basic.target - Basic System.
Sep 12 17:22:00.582915 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 12 17:22:00.582941 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 12 17:22:00.612811 systemd[1]: Starting chronyd.service - NTP client/server...
Sep 12 17:22:00.624561 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 12 17:22:00.632598 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Sep 12 17:22:00.642895 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 12 17:22:00.650656 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 12 17:22:00.664579 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 12 17:22:00.670610 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 12 17:22:00.674850 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 12 17:22:00.675908 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Sep 12 17:22:00.682706 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Sep 12 17:22:00.682405 KVP[1834]: KVP starting; pid is:1834
Sep 12 17:22:00.683583 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:22:00.683849 chronyd[1824]: chronyd version 4.7 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG)
Sep 12 17:22:00.692732 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 12 17:22:00.694996 jq[1832]: false
Sep 12 17:22:00.699998 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 12 17:22:00.704865 KVP[1834]: KVP LIC Version: 3.1
Sep 12 17:22:00.707601 kernel: hv_utils: KVP IC version 4.0
Sep 12 17:22:00.714157 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 12 17:22:00.720976 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 12 17:22:00.729512 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 12 17:22:00.743270 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 12 17:22:00.750014 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 12 17:22:00.751192 extend-filesystems[1833]: Found /dev/sda6
Sep 12 17:22:00.750712 chronyd[1824]: Timezone right/UTC failed leap second check, ignoring
Sep 12 17:22:00.756844 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 12 17:22:00.750926 chronyd[1824]: Loaded seccomp filter (level 2)
Sep 12 17:22:00.758748 systemd[1]: Starting update-engine.service - Update Engine...
Sep 12 17:22:00.769333 extend-filesystems[1833]: Found /dev/sda9
Sep 12 17:22:00.775353 extend-filesystems[1833]: Checking size of /dev/sda9
Sep 12 17:22:00.782077 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 12 17:22:00.790048 systemd[1]: Started chronyd.service - NTP client/server.
Sep 12 17:22:00.801532 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 12 17:22:00.810851 jq[1855]: true
Sep 12 17:22:00.812305 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 12 17:22:00.812677 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 12 17:22:00.819023 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 12 17:22:00.819232 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 12 17:22:00.828207 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 12 17:22:00.834120 extend-filesystems[1833]: Old size kept for /dev/sda9
Sep 12 17:22:00.836222 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 12 17:22:00.840658 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 12 17:22:00.848186 systemd[1]: motdgen.service: Deactivated successfully.
Sep 12 17:22:00.848644 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 12 17:22:00.875333 (ntainerd)[1875]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 12 17:22:00.878474 update_engine[1850]: I20250912 17:22:00.878118 1850 main.cc:92] Flatcar Update Engine starting
Sep 12 17:22:00.880166 jq[1874]: true
Sep 12 17:22:00.898796 systemd-logind[1846]: New seat seat0.
Sep 12 17:22:00.900727 systemd-logind[1846]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Sep 12 17:22:00.900885 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 12 17:22:00.976759 tar[1868]: linux-arm64/LICENSE
Sep 12 17:22:00.976759 tar[1868]: linux-arm64/helm
Sep 12 17:22:01.019276 bash[1926]: Updated "/home/core/.ssh/authorized_keys"
Sep 12 17:22:01.022505 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 12 17:22:01.026683 sshd_keygen[1864]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 12 17:22:01.032151 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep 12 17:22:01.069817 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 12 17:22:01.077919 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 12 17:22:01.083028 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Sep 12 17:22:01.121403 systemd[1]: issuegen.service: Deactivated successfully.
Sep 12 17:22:01.125702 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 12 17:22:01.137780 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Sep 12 17:22:01.149713 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 12 17:22:01.183327 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 12 17:22:01.193272 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 12 17:22:01.202440 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Sep 12 17:22:01.210883 systemd[1]: Reached target getty.target - Login Prompts.
Sep 12 17:22:01.297942 dbus-daemon[1830]: [system] SELinux support is enabled
Sep 12 17:22:01.298132 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 12 17:22:01.304063 update_engine[1850]: I20250912 17:22:01.303966 1850 update_check_scheduler.cc:74] Next update check in 9m26s
Sep 12 17:22:01.306448 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 12 17:22:01.306847 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 12 17:22:01.308246 dbus-daemon[1830]: [system] Successfully activated service 'org.freedesktop.systemd1'
Sep 12 17:22:01.313749 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 12 17:22:01.313764 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 12 17:22:01.322403 systemd[1]: Started update-engine.service - Update Engine.
Sep 12 17:22:01.331541 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 12 17:22:01.348563 tar[1868]: linux-arm64/README.md
Sep 12 17:22:01.360412 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 12 17:22:01.380744 coreos-metadata[1826]: Sep 12 17:22:01.380 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Sep 12 17:22:01.384706 coreos-metadata[1826]: Sep 12 17:22:01.384 INFO Fetch successful
Sep 12 17:22:01.385524 coreos-metadata[1826]: Sep 12 17:22:01.385 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Sep 12 17:22:01.390600 coreos-metadata[1826]: Sep 12 17:22:01.390 INFO Fetch successful
Sep 12 17:22:01.390992 coreos-metadata[1826]: Sep 12 17:22:01.390 INFO Fetching http://168.63.129.16/machine/6e886e66-d5a0-4ad9-b9ca-02208b6fbbee/e78efaa7%2D6dd7%2D491c%2Db222%2Dd00cfe5c6745.%5Fci%2D4426.1.0%2Da%2D9410d45923?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Sep 12 17:22:01.393276 coreos-metadata[1826]: Sep 12 17:22:01.393 INFO Fetch successful
Sep 12 17:22:01.393367 coreos-metadata[1826]: Sep 12 17:22:01.393 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Sep 12 17:22:01.403231 coreos-metadata[1826]: Sep 12 17:22:01.403 INFO Fetch successful
Sep 12 17:22:01.433514 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Sep 12 17:22:01.438965 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 12 17:22:01.528431 locksmithd[2004]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 12 17:22:01.629624 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:22:01.645581 containerd[1875]: time="2025-09-12T17:22:01Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Sep 12 17:22:01.646448 containerd[1875]: time="2025-09-12T17:22:01.646311968Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Sep 12 17:22:01.652494 containerd[1875]: time="2025-09-12T17:22:01.652233192Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.984µs"
Sep 12 17:22:01.652494 containerd[1875]: time="2025-09-12T17:22:01.652262592Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Sep 12 17:22:01.652494 containerd[1875]: time="2025-09-12T17:22:01.652277288Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Sep 12 17:22:01.652494 containerd[1875]: time="2025-09-12T17:22:01.652414064Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Sep 12 17:22:01.652494 containerd[1875]: time="2025-09-12T17:22:01.652425384Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Sep 12 17:22:01.652494 containerd[1875]: time="2025-09-12T17:22:01.652442752Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 12 17:22:01.652681 containerd[1875]: time="2025-09-12T17:22:01.652661832Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 12 17:22:01.652726 containerd[1875]: time="2025-09-12T17:22:01.652714408Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 12 17:22:01.652969 containerd[1875]: time="2025-09-12T17:22:01.652946688Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 12 17:22:01.653035 containerd[1875]: time="2025-09-12T17:22:01.653024896Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 12 17:22:01.653502 containerd[1875]: time="2025-09-12T17:22:01.653069224Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 12 17:22:01.653502 containerd[1875]: time="2025-09-12T17:22:01.653078600Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Sep 12 17:22:01.653502 containerd[1875]: time="2025-09-12T17:22:01.653161760Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Sep 12 17:22:01.653502 containerd[1875]: time="2025-09-12T17:22:01.653321152Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 12 17:22:01.653502 containerd[1875]: time="2025-09-12T17:22:01.653342848Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 12 17:22:01.653502 containerd[1875]: time="2025-09-12T17:22:01.653349168Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Sep 12 17:22:01.653502 containerd[1875]: time="2025-09-12T17:22:01.653383072Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Sep 12 17:22:01.653818 containerd[1875]: time="2025-09-12T17:22:01.653795896Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Sep 12 17:22:01.653934 containerd[1875]: time="2025-09-12T17:22:01.653919496Z" level=info msg="metadata content store policy set" policy=shared
Sep 12 17:22:01.669535 containerd[1875]: time="2025-09-12T17:22:01.669493536Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Sep 12 17:22:01.669696 containerd[1875]: time="2025-09-12T17:22:01.669680048Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Sep 12 17:22:01.669747 containerd[1875]: time="2025-09-12T17:22:01.669738216Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Sep 12 17:22:01.669815 containerd[1875]: time="2025-09-12T17:22:01.669802304Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Sep 12 17:22:01.670363 containerd[1875]: time="2025-09-12T17:22:01.669862488Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Sep 12 17:22:01.670363 containerd[1875]: time="2025-09-12T17:22:01.669876232Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Sep 12 17:22:01.670363 containerd[1875]: time="2025-09-12T17:22:01.669884992Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Sep 12 17:22:01.670363 containerd[1875]: time="2025-09-12T17:22:01.669893192Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Sep 12 17:22:01.670363 containerd[1875]: time="2025-09-12T17:22:01.669902208Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Sep 12 17:22:01.670363 containerd[1875]: time="2025-09-12T17:22:01.669908856Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Sep 12 17:22:01.670363 containerd[1875]: time="2025-09-12T17:22:01.669916000Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Sep 12 17:22:01.670363 containerd[1875]: time="2025-09-12T17:22:01.669925056Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Sep 12 17:22:01.670363 containerd[1875]: time="2025-09-12T17:22:01.670059672Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Sep 12 17:22:01.670363 containerd[1875]: time="2025-09-12T17:22:01.670074808Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Sep 12 17:22:01.670363 containerd[1875]: time="2025-09-12T17:22:01.670085432Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Sep 12 17:22:01.670363 containerd[1875]: time="2025-09-12T17:22:01.670093456Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Sep 12 17:22:01.670363 containerd[1875]: time="2025-09-12T17:22:01.670110816Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Sep 12 17:22:01.670363 containerd[1875]: time="2025-09-12T17:22:01.670117944Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Sep 12 17:22:01.670595 containerd[1875]: time="2025-09-12T17:22:01.670125288Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Sep 12 17:22:01.670595 containerd[1875]: time="2025-09-12T17:22:01.670132136Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Sep 12 17:22:01.670595 containerd[1875]: time="2025-09-12T17:22:01.670140008Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Sep 12 17:22:01.670595 containerd[1875]: time="2025-09-12T17:22:01.670148592Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Sep 12 17:22:01.670595 containerd[1875]: time="2025-09-12T17:22:01.670159176Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Sep 12 17:22:01.670595 containerd[1875]: time="2025-09-12T17:22:01.670214272Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Sep 12 17:22:01.670595 containerd[1875]: time="2025-09-12T17:22:01.670224912Z" level=info msg="Start snapshots syncer"
Sep 12 17:22:01.670595 containerd[1875]: time="2025-09-12T17:22:01.670255472Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Sep 12 17:22:01.670694 containerd[1875]: time="2025-09-12T17:22:01.670448400Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Sep 12 17:22:01.670694 containerd[1875]: time="2025-09-12T17:22:01.670508816Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Sep 12 17:22:01.670769 containerd[1875]: time="2025-09-12T17:22:01.670568064Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Sep 12 17:22:01.670769 containerd[1875]: time="2025-09-12T17:22:01.670666216Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Sep 12 17:22:01.670769 containerd[1875]: time="2025-09-12T17:22:01.670683120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Sep 12 17:22:01.670769 containerd[1875]: time="2025-09-12T17:22:01.670691712Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Sep 12 17:22:01.670769 containerd[1875]: time="2025-09-12T17:22:01.670698464Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Sep 12 17:22:01.670769 containerd[1875]: time="2025-09-12T17:22:01.670706064Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Sep 12 17:22:01.670769 containerd[1875]: time="2025-09-12T17:22:01.670712856Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Sep 12 17:22:01.670769 containerd[1875]: time="2025-09-12T17:22:01.670719792Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Sep 12 17:22:01.670769 containerd[1875]: time="2025-09-12T17:22:01.670737088Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Sep 12 17:22:01.670769 containerd[1875]: time="2025-09-12T17:22:01.670744696Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Sep 12 17:22:01.670769 containerd[1875]: time="2025-09-12T17:22:01.670757720Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Sep 12 17:22:01.670892 containerd[1875]: time="2025-09-12T17:22:01.670779792Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 12 17:22:01.670892 containerd[1875]: time="2025-09-12T17:22:01.670789704Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 12 17:22:01.670892 containerd[1875]: time="2025-09-12T17:22:01.670795008Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 12 17:22:01.670892 containerd[1875]: time="2025-09-12T17:22:01.670800400Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 12 17:22:01.670892 containerd[1875]: time="2025-09-12T17:22:01.670804984Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Sep 12 17:22:01.670892 containerd[1875]: time="2025-09-12T17:22:01.670810824Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Sep 12 17:22:01.670892 containerd[1875]: time="2025-09-12T17:22:01.670817368Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Sep 12 17:22:01.670892 containerd[1875]: time="2025-09-12T17:22:01.670829544Z" level=info msg="runtime interface created"
Sep 12 17:22:01.670892 containerd[1875]: time="2025-09-12T17:22:01.670832688Z" level=info msg="created NRI interface"
Sep 12 17:22:01.670892 containerd[1875]: time="2025-09-12T17:22:01.670837664Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Sep 12 17:22:01.670892 containerd[1875]: time="2025-09-12T17:22:01.670845696Z" level=info msg="Connect containerd service"
Sep 12 17:22:01.670892 containerd[1875]: time="2025-09-12T17:22:01.670870744Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 12 17:22:01.671482 containerd[1875]: time="2025-09-12T17:22:01.671441264Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 12 17:22:01.870973 (kubelet)[2022]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 12 17:22:02.214503 kubelet[2022]: E0912 17:22:02.214435 2022 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 12 17:22:02.216549 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 12 17:22:02.216780 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 12 17:22:02.217280 systemd[1]: kubelet.service: Consumed 543ms CPU time, 257.5M memory peak.
Sep 12 17:22:02.370641 containerd[1875]: time="2025-09-12T17:22:02.370492768Z" level=info msg="Start subscribing containerd event"
Sep 12 17:22:02.370641 containerd[1875]: time="2025-09-12T17:22:02.370561256Z" level=info msg="Start recovering state"
Sep 12 17:22:02.370781 containerd[1875]: time="2025-09-12T17:22:02.370660984Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 12 17:22:02.371006 containerd[1875]: time="2025-09-12T17:22:02.370816464Z" level=info msg="Start event monitor"
Sep 12 17:22:02.371006 containerd[1875]: time="2025-09-12T17:22:02.370836192Z" level=info msg="Start cni network conf syncer for default"
Sep 12 17:22:02.371006 containerd[1875]: time="2025-09-12T17:22:02.370843088Z" level=info msg="Start streaming server"
Sep 12 17:22:02.371006 containerd[1875]: time="2025-09-12T17:22:02.370851016Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Sep 12 17:22:02.371006 containerd[1875]: time="2025-09-12T17:22:02.370856640Z" level=info msg="runtime interface starting up..."
Sep 12 17:22:02.371006 containerd[1875]: time="2025-09-12T17:22:02.370857288Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 12 17:22:02.371006 containerd[1875]: time="2025-09-12T17:22:02.370862936Z" level=info msg="starting plugins..."
Sep 12 17:22:02.371006 containerd[1875]: time="2025-09-12T17:22:02.370891104Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Sep 12 17:22:02.371006 containerd[1875]: time="2025-09-12T17:22:02.370975840Z" level=info msg="containerd successfully booted in 0.725736s"
Sep 12 17:22:02.371111 systemd[1]: Started containerd.service - containerd container runtime.
Sep 12 17:22:02.376707 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 12 17:22:02.386528 systemd[1]: Startup finished in 1.594s (kernel) + 14.256s (initrd) + 17.471s (userspace) = 33.322s.
Sep 12 17:22:02.985369 login[1995]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:22:02.986852 login[1996]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:22:02.997122 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 12 17:22:02.997260 systemd-logind[1846]: New session 2 of user core.
Sep 12 17:22:02.999136 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 12 17:22:03.001796 systemd-logind[1846]: New session 1 of user core.
Sep 12 17:22:03.041571 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 12 17:22:03.043742 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 12 17:22:03.069568 (systemd)[2049]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 12 17:22:03.071531 systemd-logind[1846]: New session c1 of user core.
Sep 12 17:22:03.331439 systemd[2049]: Queued start job for default target default.target.
Sep 12 17:22:03.340505 systemd[2049]: Created slice app.slice - User Application Slice.
Sep 12 17:22:03.340739 systemd[2049]: Reached target paths.target - Paths.
Sep 12 17:22:03.340831 systemd[2049]: Reached target timers.target - Timers.
Sep 12 17:22:03.341932 systemd[2049]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 12 17:22:03.350375 systemd[2049]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 12 17:22:03.350657 systemd[2049]: Reached target sockets.target - Sockets.
Sep 12 17:22:03.350776 systemd[2049]: Reached target basic.target - Basic System.
Sep 12 17:22:03.350920 systemd[2049]: Reached target default.target - Main User Target.
Sep 12 17:22:03.351006 systemd[2049]: Startup finished in 274ms.
Sep 12 17:22:03.351084 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 12 17:22:03.352019 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 12 17:22:03.352477 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 12 17:22:03.411606 waagent[1991]: 2025-09-12T17:22:03.407775Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4
Sep 12 17:22:03.411894 waagent[1991]: 2025-09-12T17:22:03.411746Z INFO Daemon Daemon OS: flatcar 4426.1.0
Sep 12 17:22:03.415223 waagent[1991]: 2025-09-12T17:22:03.415189Z INFO Daemon Daemon Python: 3.11.13
Sep 12 17:22:03.419567 waagent[1991]: 2025-09-12T17:22:03.419520Z INFO Daemon Daemon Run daemon
Sep 12 17:22:03.422340 waagent[1991]: 2025-09-12T17:22:03.422310Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4426.1.0'
Sep 12 17:22:03.428922 waagent[1991]: 2025-09-12T17:22:03.428671Z INFO Daemon Daemon Using waagent for provisioning
Sep 12 17:22:03.432649 waagent[1991]: 2025-09-12T17:22:03.432428Z INFO Daemon Daemon Activate resource disk
Sep 12 17:22:03.436621 waagent[1991]: 2025-09-12T17:22:03.436578Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
Sep 12 17:22:03.445314 waagent[1991]: 2025-09-12T17:22:03.445217Z INFO Daemon Daemon Found device: None
Sep 12 17:22:03.448418 waagent[1991]: 2025-09-12T17:22:03.448380Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
Sep 12 17:22:03.455084 waagent[1991]: 2025-09-12T17:22:03.455047Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
Sep 12 17:22:03.463979 waagent[1991]: 2025-09-12T17:22:03.463938Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Sep 12 17:22:03.467877 waagent[1991]: 2025-09-12T17:22:03.467846Z INFO Daemon Daemon Running default provisioning handler
Sep 12 17:22:03.476181 waagent[1991]: 2025-09-12T17:22:03.476139Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4.
Sep 12 17:22:03.485684 waagent[1991]: 2025-09-12T17:22:03.485647Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Sep 12 17:22:03.492668 waagent[1991]: 2025-09-12T17:22:03.492638Z INFO Daemon Daemon cloud-init is enabled: False
Sep 12 17:22:03.496521 waagent[1991]: 2025-09-12T17:22:03.496494Z INFO Daemon Daemon Copying ovf-env.xml
Sep 12 17:22:03.599727 waagent[1991]: 2025-09-12T17:22:03.599601Z INFO Daemon Daemon Successfully mounted dvd
Sep 12 17:22:03.634903 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
Sep 12 17:22:03.636608 waagent[1991]: 2025-09-12T17:22:03.636553Z INFO Daemon Daemon Detect protocol endpoint
Sep 12 17:22:03.640107 waagent[1991]: 2025-09-12T17:22:03.640077Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Sep 12 17:22:03.644231 waagent[1991]: 2025-09-12T17:22:03.644205Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
Sep 12 17:22:03.649014 waagent[1991]: 2025-09-12T17:22:03.648992Z INFO Daemon Daemon Test for route to 168.63.129.16
Sep 12 17:22:03.652838 waagent[1991]: 2025-09-12T17:22:03.652810Z INFO Daemon Daemon Route to 168.63.129.16 exists
Sep 12 17:22:03.656306 waagent[1991]: 2025-09-12T17:22:03.656278Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
Sep 12 17:22:03.699433 waagent[1991]: 2025-09-12T17:22:03.699394Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
Sep 12 17:22:03.703980 waagent[1991]: 2025-09-12T17:22:03.703956Z INFO Daemon Daemon Wire protocol version:2012-11-30
Sep 12 17:22:03.707421 waagent[1991]: 2025-09-12T17:22:03.707397Z INFO Daemon Daemon Server preferred version:2015-04-05
Sep 12 17:22:03.776216 waagent[1991]: 2025-09-12T17:22:03.776136Z INFO Daemon Daemon Initializing goal state during protocol detection
Sep 12 17:22:03.780531 waagent[1991]: 2025-09-12T17:22:03.780503Z INFO Daemon Daemon Forcing an update of the goal state.
Sep 12 17:22:03.787905 waagent[1991]: 2025-09-12T17:22:03.787870Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1]
Sep 12 17:22:03.843705 waagent[1991]: 2025-09-12T17:22:03.843672Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175
Sep 12 17:22:03.847903 waagent[1991]: 2025-09-12T17:22:03.847872Z INFO Daemon
Sep 12 17:22:03.849985 waagent[1991]: 2025-09-12T17:22:03.849927Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 411cb257-4509-4312-8b27-0185458b2168 eTag: 9005004651095145106 source: Fabric]
Sep 12 17:22:03.857856 waagent[1991]: 2025-09-12T17:22:03.857827Z INFO Daemon The vmSettings originated via Fabric; will ignore them.
Sep 12 17:22:03.862248 waagent[1991]: 2025-09-12T17:22:03.862220Z INFO Daemon
Sep 12 17:22:03.864158 waagent[1991]: 2025-09-12T17:22:03.864136Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1]
Sep 12 17:22:03.872728 waagent[1991]: 2025-09-12T17:22:03.872702Z INFO Daemon Daemon Downloading artifacts profile blob
Sep 12 17:22:03.930046 waagent[1991]: 2025-09-12T17:22:03.929991Z INFO Daemon Downloaded certificate {'thumbprint': '181A5CDDA3805E7368DE2CD96110E453BBD5D8A8', 'hasPrivateKey': True}
Sep 12 17:22:03.937225 waagent[1991]: 2025-09-12T17:22:03.937191Z INFO Daemon Fetch goal state completed
Sep 12 17:22:03.946552 waagent[1991]: 2025-09-12T17:22:03.946523Z INFO Daemon Daemon Starting provisioning
Sep 12 17:22:03.950125 waagent[1991]: 2025-09-12T17:22:03.950096Z INFO Daemon Daemon Handle ovf-env.xml.
Sep 12 17:22:03.953351 waagent[1991]: 2025-09-12T17:22:03.953330Z INFO Daemon Daemon Set hostname [ci-4426.1.0-a-9410d45923]
Sep 12 17:22:03.985493 waagent[1991]: 2025-09-12T17:22:03.984770Z INFO Daemon Daemon Publish hostname [ci-4426.1.0-a-9410d45923]
Sep 12 17:22:03.989195 waagent[1991]: 2025-09-12T17:22:03.989155Z INFO Daemon Daemon Examine /proc/net/route for primary interface
Sep 12 17:22:03.994009 waagent[1991]: 2025-09-12T17:22:03.993976Z INFO Daemon Daemon Primary interface is [eth0]
Sep 12 17:22:04.016750 systemd-networkd[1679]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:22:04.016763 systemd-networkd[1679]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 12 17:22:04.016795 systemd-networkd[1679]: eth0: DHCP lease lost
Sep 12 17:22:04.021533 waagent[1991]: 2025-09-12T17:22:04.017530Z INFO Daemon Daemon Create user account if not exists
Sep 12 17:22:04.021848 waagent[1991]: 2025-09-12T17:22:04.021813Z INFO Daemon Daemon User core already exists, skip useradd
Sep 12 17:22:04.026059 waagent[1991]: 2025-09-12T17:22:04.026022Z INFO Daemon Daemon Configure sudoer
Sep 12 17:22:04.033557 waagent[1991]: 2025-09-12T17:22:04.033515Z INFO Daemon Daemon Configure sshd
Sep 12 17:22:04.040186 waagent[1991]: 2025-09-12T17:22:04.040146Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive.
Sep 12 17:22:04.048957 waagent[1991]: 2025-09-12T17:22:04.048913Z INFO Daemon Daemon Deploy ssh public key.
Sep 12 17:22:04.049505 systemd-networkd[1679]: eth0: DHCPv4 address 10.200.20.11/24, gateway 10.200.20.1 acquired from 168.63.129.16
Sep 12 17:22:05.178592 waagent[1991]: 2025-09-12T17:22:05.178434Z INFO Daemon Daemon Provisioning complete
Sep 12 17:22:05.191204 waagent[1991]: 2025-09-12T17:22:05.191170Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
Sep 12 17:22:05.196143 waagent[1991]: 2025-09-12T17:22:05.196111Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
Sep 12 17:22:05.202822 waagent[1991]: 2025-09-12T17:22:05.202797Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent
Sep 12 17:22:05.300281 waagent[2102]: 2025-09-12T17:22:05.300223Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4)
Sep 12 17:22:05.301497 waagent[2102]: 2025-09-12T17:22:05.300676Z INFO ExtHandler ExtHandler OS: flatcar 4426.1.0
Sep 12 17:22:05.301497 waagent[2102]: 2025-09-12T17:22:05.300731Z INFO ExtHandler ExtHandler Python: 3.11.13
Sep 12 17:22:05.301497 waagent[2102]: 2025-09-12T17:22:05.300768Z INFO ExtHandler ExtHandler CPU Arch: aarch64
Sep 12 17:22:05.374400 waagent[2102]: 2025-09-12T17:22:05.374347Z INFO ExtHandler ExtHandler Distro: flatcar-4426.1.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0;
Sep 12 17:22:05.374695 waagent[2102]: 2025-09-12T17:22:05.374664Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Sep 12 17:22:05.374814 waagent[2102]: 2025-09-12T17:22:05.374791Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Sep 12 17:22:05.380899 waagent[2102]: 2025-09-12T17:22:05.380845Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
Sep 12 17:22:05.385952 waagent[2102]: 2025-09-12T17:22:05.385920Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175
Sep 12 17:22:05.386400 waagent[2102]: 2025-09-12T17:22:05.386365Z INFO ExtHandler
Sep 12 17:22:05.386551 waagent[2102]: 2025-09-12T17:22:05.386524Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: af7662ed-45b9-4f48-af1f-7da65f46190f eTag: 9005004651095145106 source: Fabric]
Sep 12 17:22:05.386885 waagent[2102]: 2025-09-12T17:22:05.386855Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Sep 12 17:22:05.387369 waagent[2102]: 2025-09-12T17:22:05.387337Z INFO ExtHandler
Sep 12 17:22:05.387499 waagent[2102]: 2025-09-12T17:22:05.387458Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1]
Sep 12 17:22:05.391505 waagent[2102]: 2025-09-12T17:22:05.390931Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
Sep 12 17:22:05.446506 waagent[2102]: 2025-09-12T17:22:05.446372Z INFO ExtHandler Downloaded certificate {'thumbprint': '181A5CDDA3805E7368DE2CD96110E453BBD5D8A8', 'hasPrivateKey': True}
Sep 12 17:22:05.446823 waagent[2102]: 2025-09-12T17:22:05.446789Z INFO ExtHandler Fetch goal state completed
Sep 12 17:22:05.458473 waagent[2102]: 2025-09-12T17:22:05.458421Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.4.1 11 Feb 2025 (Library: OpenSSL 3.4.1 11 Feb 2025)
Sep 12 17:22:05.461622 waagent[2102]: 2025-09-12T17:22:05.461579Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2102
Sep 12 17:22:05.461718 waagent[2102]: 2025-09-12T17:22:05.461694Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ********
Sep 12 17:22:05.461945 waagent[2102]: 2025-09-12T17:22:05.461920Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ********
Sep 12 17:22:05.463013 waagent[2102]: 2025-09-12T17:22:05.462978Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4426.1.0', '', 'Flatcar Container Linux by Kinvolk']
Sep 12 17:22:05.463319 waagent[2102]: 2025-09-12T17:22:05.463290Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4426.1.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported
Sep 12 17:22:05.463428 waagent[2102]: 2025-09-12T17:22:05.463406Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False
Sep 12 17:22:05.463896 waagent[2102]: 2025-09-12T17:22:05.463866Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Sep 12 17:22:05.522240 waagent[2102]: 2025-09-12T17:22:05.522203Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Sep 12 17:22:05.522393 waagent[2102]: 2025-09-12T17:22:05.522369Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Sep 12 17:22:05.526956 waagent[2102]: 2025-09-12T17:22:05.526599Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Sep 12 17:22:05.530913 systemd[1]: Reload requested from client PID 2117 ('systemctl') (unit waagent.service)...
Sep 12 17:22:05.530925 systemd[1]: Reloading...
Sep 12 17:22:05.607522 zram_generator::config[2165]: No configuration found.
Sep 12 17:22:05.736435 systemd[1]: Reloading finished in 205 ms.
Sep 12 17:22:05.754160 waagent[2102]: 2025-09-12T17:22:05.753502Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service
Sep 12 17:22:05.754160 waagent[2102]: 2025-09-12T17:22:05.753647Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully
Sep 12 17:22:06.766411 waagent[2102]: 2025-09-12T17:22:06.765674Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up.
Sep 12 17:22:06.766411 waagent[2102]: 2025-09-12T17:22:06.765977Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True]
Sep 12 17:22:06.766746 waagent[2102]: 2025-09-12T17:22:06.766614Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Sep 12 17:22:06.766746 waagent[2102]: 2025-09-12T17:22:06.766680Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Sep 12 17:22:06.766864 waagent[2102]: 2025-09-12T17:22:06.766834Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Sep 12 17:22:06.766956 waagent[2102]: 2025-09-12T17:22:06.766915Z INFO ExtHandler ExtHandler Starting env monitor service.
Sep 12 17:22:06.767081 waagent[2102]: 2025-09-12T17:22:06.767052Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Sep 12 17:22:06.767081 waagent[2102]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Sep 12 17:22:06.767081 waagent[2102]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0
Sep 12 17:22:06.767081 waagent[2102]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Sep 12 17:22:06.767081 waagent[2102]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Sep 12 17:22:06.767081 waagent[2102]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Sep 12 17:22:06.767081 waagent[2102]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Sep 12 17:22:06.767490 waagent[2102]: 2025-09-12T17:22:06.767443Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Sep 12 17:22:06.767857 waagent[2102]: 2025-09-12T17:22:06.767819Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Sep 12 17:22:06.767955 waagent[2102]: 2025-09-12T17:22:06.767919Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
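Aside (not part of the original log): the Destination, Gateway, and Mask columns in the MonitorHandler routing-table dump above are little-endian hex, as in /proc/net/route itself. A minimal sketch of decoding them, using sample values taken from the table above (the helper name is ours):

```python
import socket
import struct

def decode_route_hex(hexfield: str) -> str:
    """Decode one little-endian hex field from /proc/net/route into a dotted quad."""
    return socket.inet_ntoa(struct.pack("<I", int(hexfield, 16)))

# Values from the routing table dump above:
print(decode_route_hex("0114C80A"))  # default gateway -> 10.200.20.1
print(decode_route_hex("00FFFFFF"))  # netmask         -> 255.255.255.0
print(decode_route_hex("10813FA8"))  # host route      -> 168.63.129.16 (Azure WireServer)
print(decode_route_hex("FEA9FEA9"))  # host route      -> 169.254.169.254
```

The decoded gateway matches the DHCPv4 lease logged by systemd-networkd (gateway 10.200.20.1), and the 10813FA8 host route is the WireServer endpoint the agent probes.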
Sep 12 17:22:06.768280 waagent[2102]: 2025-09-12T17:22:06.768246Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Sep 12 17:22:06.768339 waagent[2102]: 2025-09-12T17:22:06.768319Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Sep 12 17:22:06.768430 waagent[2102]: 2025-09-12T17:22:06.768394Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Sep 12 17:22:06.768600 waagent[2102]: 2025-09-12T17:22:06.768491Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Sep 12 17:22:06.768600 waagent[2102]: 2025-09-12T17:22:06.768538Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Sep 12 17:22:06.768648 waagent[2102]: 2025-09-12T17:22:06.768620Z INFO EnvHandler ExtHandler Configure routes
Sep 12 17:22:06.768665 waagent[2102]: 2025-09-12T17:22:06.768656Z INFO EnvHandler ExtHandler Gateway:None
Sep 12 17:22:06.768696 waagent[2102]: 2025-09-12T17:22:06.768678Z INFO EnvHandler ExtHandler Routes:None
Sep 12 17:22:06.774134 waagent[2102]: 2025-09-12T17:22:06.774105Z INFO ExtHandler ExtHandler
Sep 12 17:22:06.774254 waagent[2102]: 2025-09-12T17:22:06.774231Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 8b7d6508-e30c-44ca-b261-31eb5503ce88 correlation ef22263b-0c00-40d1-a4a8-4f905e56d139 created: 2025-09-12T17:20:45.897179Z]
Sep 12 17:22:06.774591 waagent[2102]: 2025-09-12T17:22:06.774564Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
Sep 12 17:22:06.775050 waagent[2102]: 2025-09-12T17:22:06.775025Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms]
Sep 12 17:22:06.801218 waagent[2102]: 2025-09-12T17:22:06.801168Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command
Sep 12 17:22:06.801218 waagent[2102]: Try `iptables -h' or 'iptables --help' for more information.)
Sep 12 17:22:06.801541 waagent[2102]: 2025-09-12T17:22:06.801509Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 3DC953DF-1EB8-4B74-8259-77B9B5E63755;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;]
Sep 12 17:22:06.857627 waagent[2102]: 2025-09-12T17:22:06.857561Z INFO MonitorHandler ExtHandler Network interfaces:
Sep 12 17:22:06.857627 waagent[2102]: Executing ['ip', '-a', '-o', 'link']:
Sep 12 17:22:06.857627 waagent[2102]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Sep 12 17:22:06.857627 waagent[2102]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:fb:4c:57 brd ff:ff:ff:ff:ff:ff
Sep 12 17:22:06.857627 waagent[2102]: 3: enP35297s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:fb:4c:57 brd ff:ff:ff:ff:ff:ff\ altname enP35297p0s2
Sep 12 17:22:06.857627 waagent[2102]: Executing ['ip', '-4', '-a', '-o', 'address']:
Sep 12 17:22:06.857627 waagent[2102]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Sep 12 17:22:06.857627 waagent[2102]: 2: eth0 inet 10.200.20.11/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever
Sep 12 17:22:06.857627 waagent[2102]: Executing ['ip', '-6', '-a', '-o', 'address']:
Sep 12 17:22:06.857627 waagent[2102]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever
Sep 12 17:22:06.857627 waagent[2102]: 2: eth0 inet6 fe80::20d:3aff:fefb:4c57/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Sep 12 17:22:06.912624 waagent[2102]: 2025-09-12T17:22:06.912150Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric:
Sep 12 17:22:06.912624 waagent[2102]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Sep 12 17:22:06.912624 waagent[2102]: pkts bytes target prot opt in out source destination
Sep 12 17:22:06.912624 waagent[2102]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Sep 12 17:22:06.912624 waagent[2102]: pkts bytes target prot opt in out source destination
Sep 12 17:22:06.912624 waagent[2102]: Chain OUTPUT (policy ACCEPT 5 packets, 466 bytes)
Sep 12 17:22:06.912624 waagent[2102]: pkts bytes target prot opt in out source destination
Sep 12 17:22:06.912624 waagent[2102]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Sep 12 17:22:06.912624 waagent[2102]: 3 534 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Sep 12 17:22:06.912624 waagent[2102]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Sep 12 17:22:06.914872 waagent[2102]: 2025-09-12T17:22:06.914840Z INFO EnvHandler ExtHandler Current Firewall rules:
Sep 12 17:22:06.914872 waagent[2102]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Sep 12 17:22:06.914872 waagent[2102]: pkts bytes target prot opt in out source destination
Sep 12 17:22:06.914872 waagent[2102]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Sep 12 17:22:06.914872 waagent[2102]: pkts bytes target prot opt in out source destination
Sep 12 17:22:06.914872 waagent[2102]: Chain OUTPUT (policy ACCEPT 5 packets, 466 bytes)
Sep 12 17:22:06.914872 waagent[2102]: pkts bytes target prot opt in out source destination
Sep 12 17:22:06.914872 waagent[2102]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Sep 12 17:22:06.914872 waagent[2102]: 7 950 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Sep 12 17:22:06.914872 waagent[2102]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Sep 12 17:22:06.915246 waagent[2102]: 2025-09-12T17:22:06.915224Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300
Sep 12 17:22:12.289343 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 12 17:22:12.291090 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:22:12.388786 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:22:12.391729 (kubelet)[2252]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 12 17:22:12.499809 kubelet[2252]: E0912 17:22:12.499743 2252 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 12 17:22:12.502485 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 12 17:22:12.502596 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 12 17:22:12.503069 systemd[1]: kubelet.service: Consumed 186ms CPU time, 107.3M memory peak.
Sep 12 17:22:22.539489 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 12 17:22:22.540709 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:22:22.631714 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:22:22.635851 (kubelet)[2267]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 12 17:22:22.754429 kubelet[2267]: E0912 17:22:22.754350 2267 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 12 17:22:22.756184 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 12 17:22:22.756294 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 12 17:22:22.756803 systemd[1]: kubelet.service: Consumed 103ms CPU time, 107.4M memory peak.
Sep 12 17:22:24.549684 chronyd[1824]: Selected source PHC0
Sep 12 17:22:32.701298 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 12 17:22:32.703657 systemd[1]: Started sshd@0-10.200.20.11:22-10.200.16.10:33780.service - OpenSSH per-connection server daemon (10.200.16.10:33780).
Sep 12 17:22:32.789176 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Sep 12 17:22:32.790551 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:22:32.959147 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:22:32.961717 (kubelet)[2285]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 12 17:22:32.985986 kubelet[2285]: E0912 17:22:32.985940 2285 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 12 17:22:32.987818 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 12 17:22:32.988035 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 12 17:22:32.988549 systemd[1]: kubelet.service: Consumed 99ms CPU time, 105M memory peak.
Sep 12 17:22:33.756394 sshd[2275]: Accepted publickey for core from 10.200.16.10 port 33780 ssh2: RSA SHA256:+on2THTR/nRp7vYd/q00sis2kB6WPhgipQlvfvqeQ7E
Sep 12 17:22:33.757484 sshd-session[2275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:22:33.761518 systemd-logind[1846]: New session 3 of user core.
Sep 12 17:22:33.768762 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 12 17:22:34.187625 systemd[1]: Started sshd@1-10.200.20.11:22-10.200.16.10:33788.service - OpenSSH per-connection server daemon (10.200.16.10:33788).
Sep 12 17:22:34.675759 sshd[2297]: Accepted publickey for core from 10.200.16.10 port 33788 ssh2: RSA SHA256:+on2THTR/nRp7vYd/q00sis2kB6WPhgipQlvfvqeQ7E
Sep 12 17:22:34.676847 sshd-session[2297]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:22:34.680386 systemd-logind[1846]: New session 4 of user core.
Sep 12 17:22:34.687685 systemd[1]: Started session-4.scope - Session 4 of User core.
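Aside (not part of the original log): the kubelet failures above recur at roughly ten-second intervals (restart counters 1, 2, and 3 at 17:22:12, 17:22:22, and 17:22:32), which is the pattern produced by a unit configured along these lines. This is a sketch of the assumed restart settings, not the actual Flatcar unit file, and the failures themselves would persist until something (typically kubeadm) writes /var/lib/kubelet/config.yaml:

```
[Service]
# Assumed settings consistent with the ~10 s restart cadence seen above.
Restart=always
RestartSec=10
```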
Sep 12 17:22:35.038803 sshd[2300]: Connection closed by 10.200.16.10 port 33788 Sep 12 17:22:35.039249 sshd-session[2297]: pam_unix(sshd:session): session closed for user core Sep 12 17:22:35.042137 systemd[1]: sshd@1-10.200.20.11:22-10.200.16.10:33788.service: Deactivated successfully. Sep 12 17:22:35.043406 systemd[1]: session-4.scope: Deactivated successfully. Sep 12 17:22:35.044322 systemd-logind[1846]: Session 4 logged out. Waiting for processes to exit. Sep 12 17:22:35.045404 systemd-logind[1846]: Removed session 4. Sep 12 17:22:35.126649 systemd[1]: Started sshd@2-10.200.20.11:22-10.200.16.10:33802.service - OpenSSH per-connection server daemon (10.200.16.10:33802). Sep 12 17:22:35.615883 sshd[2306]: Accepted publickey for core from 10.200.16.10 port 33802 ssh2: RSA SHA256:+on2THTR/nRp7vYd/q00sis2kB6WPhgipQlvfvqeQ7E Sep 12 17:22:35.616948 sshd-session[2306]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:22:35.620395 systemd-logind[1846]: New session 5 of user core. Sep 12 17:22:35.628763 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 12 17:22:35.956768 sshd[2309]: Connection closed by 10.200.16.10 port 33802 Sep 12 17:22:35.957228 sshd-session[2306]: pam_unix(sshd:session): session closed for user core Sep 12 17:22:35.960106 systemd[1]: sshd@2-10.200.20.11:22-10.200.16.10:33802.service: Deactivated successfully. Sep 12 17:22:35.961356 systemd[1]: session-5.scope: Deactivated successfully. Sep 12 17:22:35.962242 systemd-logind[1846]: Session 5 logged out. Waiting for processes to exit. Sep 12 17:22:35.963150 systemd-logind[1846]: Removed session 5. Sep 12 17:22:36.043752 systemd[1]: Started sshd@3-10.200.20.11:22-10.200.16.10:33814.service - OpenSSH per-connection server daemon (10.200.16.10:33814). 
Sep 12 17:22:36.534077 sshd[2315]: Accepted publickey for core from 10.200.16.10 port 33814 ssh2: RSA SHA256:+on2THTR/nRp7vYd/q00sis2kB6WPhgipQlvfvqeQ7E
Sep 12 17:22:36.535112 sshd-session[2315]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:22:36.538536 systemd-logind[1846]: New session 6 of user core.
Sep 12 17:22:36.545580 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 12 17:22:36.883493 sshd[2318]: Connection closed by 10.200.16.10 port 33814
Sep 12 17:22:36.883970 sshd-session[2315]: pam_unix(sshd:session): session closed for user core
Sep 12 17:22:36.887720 systemd[1]: sshd@3-10.200.20.11:22-10.200.16.10:33814.service: Deactivated successfully.
Sep 12 17:22:36.889046 systemd[1]: session-6.scope: Deactivated successfully.
Sep 12 17:22:36.889629 systemd-logind[1846]: Session 6 logged out. Waiting for processes to exit.
Sep 12 17:22:36.890621 systemd-logind[1846]: Removed session 6.
Sep 12 17:22:36.973717 systemd[1]: Started sshd@4-10.200.20.11:22-10.200.16.10:33820.service - OpenSSH per-connection server daemon (10.200.16.10:33820).
Sep 12 17:22:37.464260 sshd[2324]: Accepted publickey for core from 10.200.16.10 port 33820 ssh2: RSA SHA256:+on2THTR/nRp7vYd/q00sis2kB6WPhgipQlvfvqeQ7E
Sep 12 17:22:37.465309 sshd-session[2324]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:22:37.468737 systemd-logind[1846]: New session 7 of user core.
Sep 12 17:22:37.475596 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 12 17:22:37.911494 sudo[2328]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 12 17:22:37.911712 sudo[2328]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 17:22:37.935852 sudo[2328]: pam_unix(sudo:session): session closed for user root
Sep 12 17:22:38.006706 sshd[2327]: Connection closed by 10.200.16.10 port 33820
Sep 12 17:22:38.007634 sshd-session[2324]: pam_unix(sshd:session): session closed for user core
Sep 12 17:22:38.010117 systemd[1]: sshd@4-10.200.20.11:22-10.200.16.10:33820.service: Deactivated successfully.
Sep 12 17:22:38.011415 systemd[1]: session-7.scope: Deactivated successfully.
Sep 12 17:22:38.012886 systemd-logind[1846]: Session 7 logged out. Waiting for processes to exit.
Sep 12 17:22:38.013629 systemd-logind[1846]: Removed session 7.
Sep 12 17:22:38.097814 systemd[1]: Started sshd@5-10.200.20.11:22-10.200.16.10:33822.service - OpenSSH per-connection server daemon (10.200.16.10:33822).
Sep 12 17:22:38.587798 sshd[2334]: Accepted publickey for core from 10.200.16.10 port 33822 ssh2: RSA SHA256:+on2THTR/nRp7vYd/q00sis2kB6WPhgipQlvfvqeQ7E
Sep 12 17:22:38.588868 sshd-session[2334]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:22:38.592219 systemd-logind[1846]: New session 8 of user core.
Sep 12 17:22:38.599685 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 12 17:22:38.861460 sudo[2339]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 12 17:22:38.862032 sudo[2339]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 17:22:38.868191 sudo[2339]: pam_unix(sudo:session): session closed for user root
Sep 12 17:22:38.871485 sudo[2338]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Sep 12 17:22:38.871675 sudo[2338]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 17:22:38.878152 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 12 17:22:38.910167 augenrules[2361]: No rules
Sep 12 17:22:38.911182 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 12 17:22:38.911346 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 12 17:22:38.914010 sudo[2338]: pam_unix(sudo:session): session closed for user root
Sep 12 17:22:39.004682 sshd[2337]: Connection closed by 10.200.16.10 port 33822
Sep 12 17:22:39.004130 sshd-session[2334]: pam_unix(sshd:session): session closed for user core
Sep 12 17:22:39.006975 systemd-logind[1846]: Session 8 logged out. Waiting for processes to exit.
Sep 12 17:22:39.007204 systemd[1]: sshd@5-10.200.20.11:22-10.200.16.10:33822.service: Deactivated successfully.
Sep 12 17:22:39.008579 systemd[1]: session-8.scope: Deactivated successfully.
Sep 12 17:22:39.010226 systemd-logind[1846]: Removed session 8.
Sep 12 17:22:39.095000 systemd[1]: Started sshd@6-10.200.20.11:22-10.200.16.10:33834.service - OpenSSH per-connection server daemon (10.200.16.10:33834).
Sep 12 17:22:39.584101 sshd[2370]: Accepted publickey for core from 10.200.16.10 port 33834 ssh2: RSA SHA256:+on2THTR/nRp7vYd/q00sis2kB6WPhgipQlvfvqeQ7E
Sep 12 17:22:39.585153 sshd-session[2370]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:22:39.588536 systemd-logind[1846]: New session 9 of user core.
Sep 12 17:22:39.596748 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 12 17:22:39.857197 sudo[2374]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 12 17:22:39.857403 sudo[2374]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 17:22:41.295313 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 12 17:22:41.303854 (dockerd)[2393]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 12 17:22:42.228949 dockerd[2393]: time="2025-09-12T17:22:42.228898956Z" level=info msg="Starting up"
Sep 12 17:22:42.229599 dockerd[2393]: time="2025-09-12T17:22:42.229574157Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Sep 12 17:22:42.237189 dockerd[2393]: time="2025-09-12T17:22:42.237164165Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Sep 12 17:22:42.319902 dockerd[2393]: time="2025-09-12T17:22:42.319874397Z" level=info msg="Loading containers: start."
Sep 12 17:22:42.372487 kernel: Initializing XFRM netlink socket
Sep 12 17:22:42.820361 systemd-networkd[1679]: docker0: Link UP
Sep 12 17:22:42.835976 dockerd[2393]: time="2025-09-12T17:22:42.835942374Z" level=info msg="Loading containers: done."
Sep 12 17:22:42.845371 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3490816174-merged.mount: Deactivated successfully.
Sep 12 17:22:42.858850 dockerd[2393]: time="2025-09-12T17:22:42.858817790Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 12 17:22:42.858934 dockerd[2393]: time="2025-09-12T17:22:42.858906276Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Sep 12 17:22:42.859005 dockerd[2393]: time="2025-09-12T17:22:42.858988450Z" level=info msg="Initializing buildkit"
Sep 12 17:22:42.910793 dockerd[2393]: time="2025-09-12T17:22:42.910753375Z" level=info msg="Completed buildkit initialization"
Sep 12 17:22:42.913779 dockerd[2393]: time="2025-09-12T17:22:42.913754993Z" level=info msg="Daemon has completed initialization"
Sep 12 17:22:42.914073 dockerd[2393]: time="2025-09-12T17:22:42.913868713Z" level=info msg="API listen on /run/docker.sock"
Sep 12 17:22:42.913996 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 12 17:22:43.039206 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Sep 12 17:22:43.041031 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:22:43.159597 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:22:43.166692 (kubelet)[2609]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 12 17:22:43.191162 kubelet[2609]: E0912 17:22:43.191095 2609 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 12 17:22:43.193084 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 12 17:22:43.193286 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 12 17:22:43.193784 systemd[1]: kubelet.service: Consumed 101ms CPU time, 104.8M memory peak.
Sep 12 17:22:43.572486 kernel: hv_balloon: Max. dynamic memory size: 4096 MB
Sep 12 17:22:43.844185 containerd[1875]: time="2025-09-12T17:22:43.844074139Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\""
Sep 12 17:22:44.624333 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4289913957.mount: Deactivated successfully.
Sep 12 17:22:45.754236 containerd[1875]: time="2025-09-12T17:22:45.754181609Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:22:45.756926 containerd[1875]: time="2025-09-12T17:22:45.756894188Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=26363685"
Sep 12 17:22:45.759986 containerd[1875]: time="2025-09-12T17:22:45.759958149Z" level=info msg="ImageCreate event name:\"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:22:45.766475 containerd[1875]: time="2025-09-12T17:22:45.766427637Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:22:45.767202 containerd[1875]: time="2025-09-12T17:22:45.767013668Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"26360284\" in 1.922903928s"
Sep 12 17:22:45.767202 containerd[1875]: time="2025-09-12T17:22:45.767042293Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\""
Sep 12 17:22:45.767792 containerd[1875]: time="2025-09-12T17:22:45.767738185Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\""
Sep 12 17:22:46.068499 update_engine[1850]: I20250912 17:22:46.068190 1850 update_attempter.cc:509] Updating boot flags...
Sep 12 17:22:47.064494 containerd[1875]: time="2025-09-12T17:22:47.064188755Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:22:47.069868 containerd[1875]: time="2025-09-12T17:22:47.069834642Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=22531200"
Sep 12 17:22:47.073191 containerd[1875]: time="2025-09-12T17:22:47.073152573Z" level=info msg="ImageCreate event name:\"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:22:47.079601 containerd[1875]: time="2025-09-12T17:22:47.079540226Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:22:47.079838 containerd[1875]: time="2025-09-12T17:22:47.079813005Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"24099975\" in 1.31193115s"
Sep 12 17:22:47.079924 containerd[1875]: time="2025-09-12T17:22:47.079839422Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\""
Sep 12 17:22:47.080547 containerd[1875]: time="2025-09-12T17:22:47.080529329Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\""
Sep 12 17:22:48.158500 containerd[1875]: time="2025-09-12T17:22:48.158443196Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:22:48.161350 containerd[1875]: time="2025-09-12T17:22:48.161326446Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=17484324"
Sep 12 17:22:48.164360 containerd[1875]: time="2025-09-12T17:22:48.164325341Z" level=info msg="ImageCreate event name:\"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:22:48.170493 containerd[1875]: time="2025-09-12T17:22:48.170214085Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:22:48.170666 containerd[1875]: time="2025-09-12T17:22:48.170647591Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"19053117\" in 1.090025906s"
Sep 12 17:22:48.170732 containerd[1875]: time="2025-09-12T17:22:48.170720121Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\""
Sep 12 17:22:48.171201 containerd[1875]: time="2025-09-12T17:22:48.171181028Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\""
Sep 12 17:22:49.139345 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1183393672.mount: Deactivated successfully.
Sep 12 17:22:49.889688 containerd[1875]: time="2025-09-12T17:22:49.889637091Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:22:49.893555 containerd[1875]: time="2025-09-12T17:22:49.893334062Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=27417817"
Sep 12 17:22:49.898447 containerd[1875]: time="2025-09-12T17:22:49.898401786Z" level=info msg="ImageCreate event name:\"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:22:49.904568 containerd[1875]: time="2025-09-12T17:22:49.904524786Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:22:49.904974 containerd[1875]: time="2025-09-12T17:22:49.904831495Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"27416836\" in 1.733621002s"
Sep 12 17:22:49.904974 containerd[1875]: time="2025-09-12T17:22:49.904859024Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\""
Sep 12 17:22:49.905487 containerd[1875]: time="2025-09-12T17:22:49.905461689Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 12 17:22:50.622988 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1175493799.mount: Deactivated successfully.
Sep 12 17:22:51.444417 containerd[1875]: time="2025-09-12T17:22:51.444351823Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:22:51.448382 containerd[1875]: time="2025-09-12T17:22:51.448345654Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622"
Sep 12 17:22:51.451903 containerd[1875]: time="2025-09-12T17:22:51.451879658Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:22:51.455791 containerd[1875]: time="2025-09-12T17:22:51.455755501Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:22:51.456844 containerd[1875]: time="2025-09-12T17:22:51.456511860Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.550935398s"
Sep 12 17:22:51.456844 containerd[1875]: time="2025-09-12T17:22:51.456539805Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Sep 12 17:22:51.457163 containerd[1875]: time="2025-09-12T17:22:51.457137311Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 12 17:22:52.074210 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2556151044.mount: Deactivated successfully.
Sep 12 17:22:52.094911 containerd[1875]: time="2025-09-12T17:22:52.094876387Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 17:22:52.099199 containerd[1875]: time="2025-09-12T17:22:52.099161374Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
Sep 12 17:22:52.101986 containerd[1875]: time="2025-09-12T17:22:52.101966468Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 17:22:52.105415 containerd[1875]: time="2025-09-12T17:22:52.105381187Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 17:22:52.106016 containerd[1875]: time="2025-09-12T17:22:52.105783532Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 648.559265ms"
Sep 12 17:22:52.106016 containerd[1875]: time="2025-09-12T17:22:52.105811117Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Sep 12 17:22:52.106367 containerd[1875]: time="2025-09-12T17:22:52.106347667Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Sep 12 17:22:52.840581 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2156916126.mount: Deactivated successfully.
Sep 12 17:22:53.289301 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Sep 12 17:22:53.291569 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:22:53.388098 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:22:53.390867 (kubelet)[2854]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 12 17:22:53.419477 kubelet[2854]: E0912 17:22:53.419374 2854 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 12 17:22:53.422746 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 12 17:22:53.422848 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 12 17:22:53.423255 systemd[1]: kubelet.service: Consumed 101ms CPU time, 106.9M memory peak.
Sep 12 17:22:55.495898 containerd[1875]: time="2025-09-12T17:22:55.495838842Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:22:55.499219 containerd[1875]: time="2025-09-12T17:22:55.499037136Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943165"
Sep 12 17:22:55.502513 containerd[1875]: time="2025-09-12T17:22:55.502486760Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:22:55.508530 containerd[1875]: time="2025-09-12T17:22:55.508470547Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:22:55.509165 containerd[1875]: time="2025-09-12T17:22:55.509033458Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 3.402663878s"
Sep 12 17:22:55.509165 containerd[1875]: time="2025-09-12T17:22:55.509063332Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\""
Sep 12 17:22:57.512339 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:22:57.513070 systemd[1]: kubelet.service: Consumed 101ms CPU time, 106.9M memory peak.
Sep 12 17:22:57.515276 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:22:57.541580 systemd[1]: Reload requested from client PID 2907 ('systemctl') (unit session-9.scope)...
Sep 12 17:22:57.541593 systemd[1]: Reloading...
Sep 12 17:22:57.628502 zram_generator::config[2954]: No configuration found.
Sep 12 17:22:57.774044 systemd[1]: Reloading finished in 232 ms.
Sep 12 17:22:57.830884 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 12 17:22:57.830947 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 12 17:22:57.832502 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:22:57.832544 systemd[1]: kubelet.service: Consumed 72ms CPU time, 95M memory peak.
Sep 12 17:22:57.833617 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:22:58.115089 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:22:58.128702 (kubelet)[3021]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 12 17:22:58.261071 kubelet[3021]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 17:22:58.261071 kubelet[3021]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 12 17:22:58.261071 kubelet[3021]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 17:22:58.261405 kubelet[3021]: I0912 17:22:58.261111 3021 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 12 17:22:58.687461 kubelet[3021]: I0912 17:22:58.687419 3021 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Sep 12 17:22:58.687461 kubelet[3021]: I0912 17:22:58.687453 3021 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 12 17:22:58.687720 kubelet[3021]: I0912 17:22:58.687703 3021 server.go:954] "Client rotation is on, will bootstrap in background"
Sep 12 17:22:58.709395 kubelet[3021]: E0912 17:22:58.709357 3021 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.11:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:22:58.710578 kubelet[3021]: I0912 17:22:58.710411 3021 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 12 17:22:58.714419 kubelet[3021]: I0912 17:22:58.714405 3021 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 12 17:22:58.717861 kubelet[3021]: I0912 17:22:58.717836 3021 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 12 17:22:58.718971 kubelet[3021]: I0912 17:22:58.718636 3021 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 12 17:22:58.718971 kubelet[3021]: I0912 17:22:58.718671 3021 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4426.1.0-a-9410d45923","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 12 17:22:58.718971 kubelet[3021]: I0912 17:22:58.718806 3021 topology_manager.go:138] "Creating topology manager with none policy"
Sep 12 17:22:58.718971 kubelet[3021]: I0912 17:22:58.718813 3021 container_manager_linux.go:304] "Creating device plugin manager"
Sep 12 17:22:58.719130 kubelet[3021]: I0912 17:22:58.718929 3021 state_mem.go:36] "Initialized new in-memory state store"
Sep 12 17:22:58.721702 kubelet[3021]: I0912 17:22:58.721684 3021 kubelet.go:446] "Attempting to sync node with API server"
Sep 12 17:22:58.721796 kubelet[3021]: I0912 17:22:58.721785 3021 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 12 17:22:58.721865 kubelet[3021]: I0912 17:22:58.721857 3021 kubelet.go:352] "Adding apiserver pod source"
Sep 12 17:22:58.721914 kubelet[3021]: I0912 17:22:58.721904 3021 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 12 17:22:58.725491 kubelet[3021]: W0912 17:22:58.723949 3021 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4426.1.0-a-9410d45923&limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused
Sep 12 17:22:58.725491 kubelet[3021]: E0912 17:22:58.723989 3021 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4426.1.0-a-9410d45923&limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:22:58.725491 kubelet[3021]: W0912 17:22:58.725356 3021 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused
Sep 12 17:22:58.725491 kubelet[3021]: E0912 17:22:58.725387 3021 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:22:58.725491 kubelet[3021]: I0912 17:22:58.725453 3021 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Sep 12 17:22:58.725822 kubelet[3021]: I0912 17:22:58.725798 3021 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 12 17:22:58.725859 kubelet[3021]: W0912 17:22:58.725844 3021 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 12 17:22:58.726279 kubelet[3021]: I0912 17:22:58.726253 3021 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 12 17:22:58.726327 kubelet[3021]: I0912 17:22:58.726285 3021 server.go:1287] "Started kubelet"
Sep 12 17:22:58.731327 kubelet[3021]: E0912 17:22:58.731195 3021 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.11:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.11:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4426.1.0-a-9410d45923.186498d12b78f214 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4426.1.0-a-9410d45923,UID:ci-4426.1.0-a-9410d45923,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4426.1.0-a-9410d45923,},FirstTimestamp:2025-09-12 17:22:58.72626946 +0000 UTC m=+0.595306386,LastTimestamp:2025-09-12 17:22:58.72626946 +0000 UTC m=+0.595306386,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4426.1.0-a-9410d45923,}"
Sep 12 17:22:58.732612 kubelet[3021]: I0912 17:22:58.732568 3021 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Sep 12 17:22:58.733763 kubelet[3021]: I0912 17:22:58.733740 3021 server.go:479] "Adding debug handlers to kubelet server"
Sep 12 17:22:58.734862 kubelet[3021]: E0912 17:22:58.734825 3021 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 12 17:22:58.735016 kubelet[3021]: I0912 17:22:58.734972 3021 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 12 17:22:58.735273 kubelet[3021]: I0912 17:22:58.735248 3021 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 12 17:22:58.736359 kubelet[3021]: I0912 17:22:58.736336 3021 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 12 17:22:58.737596 kubelet[3021]: I0912 17:22:58.737566 3021 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 12 17:22:58.738856 kubelet[3021]: E0912 17:22:58.738818 3021 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4426.1.0-a-9410d45923\" not found"
Sep 12 17:22:58.738856 kubelet[3021]: I0912 17:22:58.738860 3021 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 12 17:22:58.739025 kubelet[3021]: I0912 17:22:58.739003 3021 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 12 17:22:58.739082 kubelet[3021]: I0912 17:22:58.739071 3021 reconciler.go:26] "Reconciler: start to sync state"
Sep 12 17:22:58.739353 kubelet[3021]: W0912 17:22:58.739319 3021 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get
"https://10.200.20.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused Sep 12 17:22:58.739353 kubelet[3021]: E0912 17:22:58.739352 3021 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:22:58.739908 kubelet[3021]: E0912 17:22:58.739875 3021 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4426.1.0-a-9410d45923?timeout=10s\": dial tcp 10.200.20.11:6443: connect: connection refused" interval="200ms" Sep 12 17:22:58.740760 kubelet[3021]: I0912 17:22:58.740735 3021 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 17:22:58.742107 kubelet[3021]: I0912 17:22:58.742083 3021 factory.go:221] Registration of the containerd container factory successfully Sep 12 17:22:58.742107 kubelet[3021]: I0912 17:22:58.742103 3021 factory.go:221] Registration of the systemd container factory successfully Sep 12 17:22:58.765761 kubelet[3021]: I0912 17:22:58.765737 3021 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 12 17:22:58.765761 kubelet[3021]: I0912 17:22:58.765763 3021 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 12 17:22:58.765882 kubelet[3021]: I0912 17:22:58.765787 3021 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:22:58.776446 kubelet[3021]: I0912 17:22:58.776187 3021 policy_none.go:49] "None policy: Start" Sep 12 17:22:58.776446 kubelet[3021]: I0912 17:22:58.776213 3021 memory_manager.go:186] "Starting memorymanager" 
policy="None" Sep 12 17:22:58.776446 kubelet[3021]: I0912 17:22:58.776226 3021 state_mem.go:35] "Initializing new in-memory state store" Sep 12 17:22:58.784896 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 12 17:22:58.794625 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 12 17:22:58.795378 kubelet[3021]: I0912 17:22:58.795350 3021 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 17:22:58.796844 kubelet[3021]: I0912 17:22:58.796581 3021 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 12 17:22:58.796844 kubelet[3021]: I0912 17:22:58.796602 3021 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 12 17:22:58.796844 kubelet[3021]: I0912 17:22:58.796619 3021 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 12 17:22:58.796844 kubelet[3021]: I0912 17:22:58.796624 3021 kubelet.go:2382] "Starting kubelet main sync loop" Sep 12 17:22:58.796844 kubelet[3021]: E0912 17:22:58.796655 3021 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 17:22:58.797803 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Sep 12 17:22:58.799562 kubelet[3021]: W0912 17:22:58.799527 3021 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused
Sep 12 17:22:58.799740 kubelet[3021]: E0912 17:22:58.799658 3021 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:22:58.803203 kubelet[3021]: I0912 17:22:58.803176 3021 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 12 17:22:58.803813 kubelet[3021]: I0912 17:22:58.803794 3021 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 12 17:22:58.803849 kubelet[3021]: I0912 17:22:58.803813 3021 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 12 17:22:58.804055 kubelet[3021]: I0912 17:22:58.804039 3021 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 12 17:22:58.806331 kubelet[3021]: E0912 17:22:58.806312 3021 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 12 17:22:58.806522 kubelet[3021]: E0912 17:22:58.806509 3021 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4426.1.0-a-9410d45923\" not found"
Sep 12 17:22:58.904926 kubelet[3021]: I0912 17:22:58.904888 3021 kubelet_node_status.go:75] "Attempting to register node" node="ci-4426.1.0-a-9410d45923"
Sep 12 17:22:58.905355 kubelet[3021]: E0912 17:22:58.905324 3021 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.11:6443/api/v1/nodes\": dial tcp 10.200.20.11:6443: connect: connection refused" node="ci-4426.1.0-a-9410d45923"
Sep 12 17:22:58.906285 systemd[1]: Created slice kubepods-burstable-pod6a66a313f9d444eb5a1427726a96d703.slice - libcontainer container kubepods-burstable-pod6a66a313f9d444eb5a1427726a96d703.slice.
Sep 12 17:22:58.917999 kubelet[3021]: E0912 17:22:58.917979 3021 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4426.1.0-a-9410d45923\" not found" node="ci-4426.1.0-a-9410d45923"
Sep 12 17:22:58.922069 systemd[1]: Created slice kubepods-burstable-podd0b477100288d6664a738b0c3a1523fa.slice - libcontainer container kubepods-burstable-podd0b477100288d6664a738b0c3a1523fa.slice.
Sep 12 17:22:58.923797 kubelet[3021]: E0912 17:22:58.923783 3021 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4426.1.0-a-9410d45923\" not found" node="ci-4426.1.0-a-9410d45923"
Sep 12 17:22:58.925744 systemd[1]: Created slice kubepods-burstable-pod45ec4ff6a682c61e6a803d9cb0dd0891.slice - libcontainer container kubepods-burstable-pod45ec4ff6a682c61e6a803d9cb0dd0891.slice.
Sep 12 17:22:58.927140 kubelet[3021]: E0912 17:22:58.927067 3021 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4426.1.0-a-9410d45923\" not found" node="ci-4426.1.0-a-9410d45923"
Sep 12 17:22:58.941257 kubelet[3021]: I0912 17:22:58.940302 3021 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6a66a313f9d444eb5a1427726a96d703-k8s-certs\") pod \"kube-apiserver-ci-4426.1.0-a-9410d45923\" (UID: \"6a66a313f9d444eb5a1427726a96d703\") " pod="kube-system/kube-apiserver-ci-4426.1.0-a-9410d45923"
Sep 12 17:22:58.941257 kubelet[3021]: I0912 17:22:58.940490 3021 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6a66a313f9d444eb5a1427726a96d703-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4426.1.0-a-9410d45923\" (UID: \"6a66a313f9d444eb5a1427726a96d703\") " pod="kube-system/kube-apiserver-ci-4426.1.0-a-9410d45923"
Sep 12 17:22:58.941257 kubelet[3021]: I0912 17:22:58.940511 3021 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d0b477100288d6664a738b0c3a1523fa-flexvolume-dir\") pod \"kube-controller-manager-ci-4426.1.0-a-9410d45923\" (UID: \"d0b477100288d6664a738b0c3a1523fa\") " pod="kube-system/kube-controller-manager-ci-4426.1.0-a-9410d45923"
Sep 12 17:22:58.941257 kubelet[3021]: I0912 17:22:58.940523 3021 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d0b477100288d6664a738b0c3a1523fa-kubeconfig\") pod \"kube-controller-manager-ci-4426.1.0-a-9410d45923\" (UID: \"d0b477100288d6664a738b0c3a1523fa\") " pod="kube-system/kube-controller-manager-ci-4426.1.0-a-9410d45923"
Sep 12 17:22:58.941257 kubelet[3021]: I0912 17:22:58.940533 3021 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6a66a313f9d444eb5a1427726a96d703-ca-certs\") pod \"kube-apiserver-ci-4426.1.0-a-9410d45923\" (UID: \"6a66a313f9d444eb5a1427726a96d703\") " pod="kube-system/kube-apiserver-ci-4426.1.0-a-9410d45923"
Sep 12 17:22:58.941393 kubelet[3021]: I0912 17:22:58.940542 3021 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d0b477100288d6664a738b0c3a1523fa-ca-certs\") pod \"kube-controller-manager-ci-4426.1.0-a-9410d45923\" (UID: \"d0b477100288d6664a738b0c3a1523fa\") " pod="kube-system/kube-controller-manager-ci-4426.1.0-a-9410d45923"
Sep 12 17:22:58.941393 kubelet[3021]: I0912 17:22:58.940552 3021 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d0b477100288d6664a738b0c3a1523fa-k8s-certs\") pod \"kube-controller-manager-ci-4426.1.0-a-9410d45923\" (UID: \"d0b477100288d6664a738b0c3a1523fa\") " pod="kube-system/kube-controller-manager-ci-4426.1.0-a-9410d45923"
Sep 12 17:22:58.941393 kubelet[3021]: I0912 17:22:58.940562 3021 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d0b477100288d6664a738b0c3a1523fa-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4426.1.0-a-9410d45923\" (UID: \"d0b477100288d6664a738b0c3a1523fa\") " pod="kube-system/kube-controller-manager-ci-4426.1.0-a-9410d45923"
Sep 12 17:22:58.941393 kubelet[3021]: I0912 17:22:58.940589 3021 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/45ec4ff6a682c61e6a803d9cb0dd0891-kubeconfig\") pod \"kube-scheduler-ci-4426.1.0-a-9410d45923\" (UID: \"45ec4ff6a682c61e6a803d9cb0dd0891\") " pod="kube-system/kube-scheduler-ci-4426.1.0-a-9410d45923"
Sep 12 17:22:58.941620 kubelet[3021]: E0912 17:22:58.941591 3021 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4426.1.0-a-9410d45923?timeout=10s\": dial tcp 10.200.20.11:6443: connect: connection refused" interval="400ms"
Sep 12 17:22:59.107019 kubelet[3021]: I0912 17:22:59.106986 3021 kubelet_node_status.go:75] "Attempting to register node" node="ci-4426.1.0-a-9410d45923"
Sep 12 17:22:59.107308 kubelet[3021]: E0912 17:22:59.107273 3021 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.11:6443/api/v1/nodes\": dial tcp 10.200.20.11:6443: connect: connection refused" node="ci-4426.1.0-a-9410d45923"
Sep 12 17:22:59.219558 containerd[1875]: time="2025-09-12T17:22:59.219382358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4426.1.0-a-9410d45923,Uid:6a66a313f9d444eb5a1427726a96d703,Namespace:kube-system,Attempt:0,}"
Sep 12 17:22:59.225107 containerd[1875]: time="2025-09-12T17:22:59.225071585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4426.1.0-a-9410d45923,Uid:d0b477100288d6664a738b0c3a1523fa,Namespace:kube-system,Attempt:0,}"
Sep 12 17:22:59.233842 containerd[1875]: time="2025-09-12T17:22:59.233816562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4426.1.0-a-9410d45923,Uid:45ec4ff6a682c61e6a803d9cb0dd0891,Namespace:kube-system,Attempt:0,}"
Sep 12 17:22:59.321446 containerd[1875]: time="2025-09-12T17:22:59.321047008Z" level=info msg="connecting to shim a4e87e83d08f026666a8cf6aa4a9aafd0f2df316e06d642ea1c298512a7e02a1" address="unix:///run/containerd/s/cf536ffc3a7380f2e2c66f8d0a2a58125c5bd7d19dbab2b7df37e13898f5c0fe" namespace=k8s.io protocol=ttrpc version=3
Sep 12 17:22:59.322198 containerd[1875]: time="2025-09-12T17:22:59.322176255Z" level=info msg="connecting to shim fc4f1348f1968a5a2e6fdf1f4b4337b29a59239403ab90eceb6a0b89c21d5e99" address="unix:///run/containerd/s/4e74e92d889cd6bcc0fbebef81b5dfe8a9c2b70eb233607994dcc6df27be0b19" namespace=k8s.io protocol=ttrpc version=3
Sep 12 17:22:59.341828 containerd[1875]: time="2025-09-12T17:22:59.341713165Z" level=info msg="connecting to shim 9287a9c8b79893b6bb203f56f931e984c06717e84ccdde1d1fc917c46c19db56" address="unix:///run/containerd/s/d9d5e6699ad4e38ce7174b3a45634d22dedc92fc07f1434f752278fc07950652" namespace=k8s.io protocol=ttrpc version=3
Sep 12 17:22:59.342089 kubelet[3021]: E0912 17:22:59.342063 3021 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4426.1.0-a-9410d45923?timeout=10s\": dial tcp 10.200.20.11:6443: connect: connection refused" interval="800ms"
Sep 12 17:22:59.343589 systemd[1]: Started cri-containerd-fc4f1348f1968a5a2e6fdf1f4b4337b29a59239403ab90eceb6a0b89c21d5e99.scope - libcontainer container fc4f1348f1968a5a2e6fdf1f4b4337b29a59239403ab90eceb6a0b89c21d5e99.
Sep 12 17:22:59.348972 systemd[1]: Started cri-containerd-a4e87e83d08f026666a8cf6aa4a9aafd0f2df316e06d642ea1c298512a7e02a1.scope - libcontainer container a4e87e83d08f026666a8cf6aa4a9aafd0f2df316e06d642ea1c298512a7e02a1.
Sep 12 17:22:59.356535 systemd[1]: Started cri-containerd-9287a9c8b79893b6bb203f56f931e984c06717e84ccdde1d1fc917c46c19db56.scope - libcontainer container 9287a9c8b79893b6bb203f56f931e984c06717e84ccdde1d1fc917c46c19db56.
Sep 12 17:22:59.400124 containerd[1875]: time="2025-09-12T17:22:59.399999241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4426.1.0-a-9410d45923,Uid:45ec4ff6a682c61e6a803d9cb0dd0891,Namespace:kube-system,Attempt:0,} returns sandbox id \"9287a9c8b79893b6bb203f56f931e984c06717e84ccdde1d1fc917c46c19db56\""
Sep 12 17:22:59.408149 containerd[1875]: time="2025-09-12T17:22:59.408060494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4426.1.0-a-9410d45923,Uid:6a66a313f9d444eb5a1427726a96d703,Namespace:kube-system,Attempt:0,} returns sandbox id \"a4e87e83d08f026666a8cf6aa4a9aafd0f2df316e06d642ea1c298512a7e02a1\""
Sep 12 17:22:59.410672 containerd[1875]: time="2025-09-12T17:22:59.410646801Z" level=info msg="CreateContainer within sandbox \"9287a9c8b79893b6bb203f56f931e984c06717e84ccdde1d1fc917c46c19db56\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Sep 12 17:22:59.411238 containerd[1875]: time="2025-09-12T17:22:59.411211176Z" level=info msg="CreateContainer within sandbox \"a4e87e83d08f026666a8cf6aa4a9aafd0f2df316e06d642ea1c298512a7e02a1\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Sep 12 17:22:59.412599 containerd[1875]: time="2025-09-12T17:22:59.412579953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4426.1.0-a-9410d45923,Uid:d0b477100288d6664a738b0c3a1523fa,Namespace:kube-system,Attempt:0,} returns sandbox id \"fc4f1348f1968a5a2e6fdf1f4b4337b29a59239403ab90eceb6a0b89c21d5e99\""
Sep 12 17:22:59.414933 containerd[1875]: time="2025-09-12T17:22:59.414483359Z" level=info msg="CreateContainer within sandbox \"fc4f1348f1968a5a2e6fdf1f4b4337b29a59239403ab90eceb6a0b89c21d5e99\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Sep 12 17:22:59.439045 containerd[1875]: time="2025-09-12T17:22:59.439010923Z" level=info msg="Container 84f2340327e2899dab2be856dbe3a639245e2fbce0c85dfd455514825375ebe8: CDI devices from CRI Config.CDIDevices: []"
Sep 12 17:22:59.446484 containerd[1875]: time="2025-09-12T17:22:59.446329553Z" level=info msg="Container 8688afee498ac25ad69af096c464908f310c5e2e21495f93f943b2797e132257: CDI devices from CRI Config.CDIDevices: []"
Sep 12 17:22:59.454438 containerd[1875]: time="2025-09-12T17:22:59.454405494Z" level=info msg="Container ea6c9b0266cfe430c296f68434f14ea008870684b2d2b5bb3f409fec75e5c5a1: CDI devices from CRI Config.CDIDevices: []"
Sep 12 17:22:59.486509 containerd[1875]: time="2025-09-12T17:22:59.486081737Z" level=info msg="CreateContainer within sandbox \"a4e87e83d08f026666a8cf6aa4a9aafd0f2df316e06d642ea1c298512a7e02a1\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8688afee498ac25ad69af096c464908f310c5e2e21495f93f943b2797e132257\""
Sep 12 17:22:59.488114 containerd[1875]: time="2025-09-12T17:22:59.488072819Z" level=info msg="CreateContainer within sandbox \"fc4f1348f1968a5a2e6fdf1f4b4337b29a59239403ab90eceb6a0b89c21d5e99\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"84f2340327e2899dab2be856dbe3a639245e2fbce0c85dfd455514825375ebe8\""
Sep 12 17:22:59.488591 containerd[1875]: time="2025-09-12T17:22:59.488565040Z" level=info msg="StartContainer for \"84f2340327e2899dab2be856dbe3a639245e2fbce0c85dfd455514825375ebe8\""
Sep 12 17:22:59.493376 containerd[1875]: time="2025-09-12T17:22:59.493351981Z" level=info msg="StartContainer for \"8688afee498ac25ad69af096c464908f310c5e2e21495f93f943b2797e132257\""
Sep 12 17:22:59.493716 containerd[1875]: time="2025-09-12T17:22:59.493696155Z" level=info msg="connecting to shim 84f2340327e2899dab2be856dbe3a639245e2fbce0c85dfd455514825375ebe8" address="unix:///run/containerd/s/4e74e92d889cd6bcc0fbebef81b5dfe8a9c2b70eb233607994dcc6df27be0b19" protocol=ttrpc version=3
Sep 12 17:22:59.494562 containerd[1875]: time="2025-09-12T17:22:59.494366831Z" level=info msg="connecting to shim 8688afee498ac25ad69af096c464908f310c5e2e21495f93f943b2797e132257" address="unix:///run/containerd/s/cf536ffc3a7380f2e2c66f8d0a2a58125c5bd7d19dbab2b7df37e13898f5c0fe" protocol=ttrpc version=3
Sep 12 17:22:59.494562 containerd[1875]: time="2025-09-12T17:22:59.494457803Z" level=info msg="CreateContainer within sandbox \"9287a9c8b79893b6bb203f56f931e984c06717e84ccdde1d1fc917c46c19db56\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ea6c9b0266cfe430c296f68434f14ea008870684b2d2b5bb3f409fec75e5c5a1\""
Sep 12 17:22:59.495368 containerd[1875]: time="2025-09-12T17:22:59.495099717Z" level=info msg="StartContainer for \"ea6c9b0266cfe430c296f68434f14ea008870684b2d2b5bb3f409fec75e5c5a1\""
Sep 12 17:22:59.495856 containerd[1875]: time="2025-09-12T17:22:59.495834628Z" level=info msg="connecting to shim ea6c9b0266cfe430c296f68434f14ea008870684b2d2b5bb3f409fec75e5c5a1" address="unix:///run/containerd/s/d9d5e6699ad4e38ce7174b3a45634d22dedc92fc07f1434f752278fc07950652" protocol=ttrpc version=3
Sep 12 17:22:59.509394 kubelet[3021]: I0912 17:22:59.509370 3021 kubelet_node_status.go:75] "Attempting to register node" node="ci-4426.1.0-a-9410d45923"
Sep 12 17:22:59.510214 kubelet[3021]: E0912 17:22:59.510182 3021 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.11:6443/api/v1/nodes\": dial tcp 10.200.20.11:6443: connect: connection refused" node="ci-4426.1.0-a-9410d45923"
Sep 12 17:22:59.511601 systemd[1]: Started cri-containerd-ea6c9b0266cfe430c296f68434f14ea008870684b2d2b5bb3f409fec75e5c5a1.scope - libcontainer container ea6c9b0266cfe430c296f68434f14ea008870684b2d2b5bb3f409fec75e5c5a1.
Sep 12 17:22:59.514998 systemd[1]: Started cri-containerd-84f2340327e2899dab2be856dbe3a639245e2fbce0c85dfd455514825375ebe8.scope - libcontainer container 84f2340327e2899dab2be856dbe3a639245e2fbce0c85dfd455514825375ebe8.
Sep 12 17:22:59.520113 systemd[1]: Started cri-containerd-8688afee498ac25ad69af096c464908f310c5e2e21495f93f943b2797e132257.scope - libcontainer container 8688afee498ac25ad69af096c464908f310c5e2e21495f93f943b2797e132257.
Sep 12 17:22:59.573245 containerd[1875]: time="2025-09-12T17:22:59.573210260Z" level=info msg="StartContainer for \"ea6c9b0266cfe430c296f68434f14ea008870684b2d2b5bb3f409fec75e5c5a1\" returns successfully"
Sep 12 17:22:59.580601 containerd[1875]: time="2025-09-12T17:22:59.580572300Z" level=info msg="StartContainer for \"84f2340327e2899dab2be856dbe3a639245e2fbce0c85dfd455514825375ebe8\" returns successfully"
Sep 12 17:22:59.586775 containerd[1875]: time="2025-09-12T17:22:59.586633702Z" level=info msg="StartContainer for \"8688afee498ac25ad69af096c464908f310c5e2e21495f93f943b2797e132257\" returns successfully"
Sep 12 17:22:59.808634 kubelet[3021]: E0912 17:22:59.807842 3021 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4426.1.0-a-9410d45923\" not found" node="ci-4426.1.0-a-9410d45923"
Sep 12 17:22:59.810392 kubelet[3021]: E0912 17:22:59.810354 3021 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4426.1.0-a-9410d45923\" not found" node="ci-4426.1.0-a-9410d45923"
Sep 12 17:22:59.814028 kubelet[3021]: E0912 17:22:59.813939 3021 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4426.1.0-a-9410d45923\" not found" node="ci-4426.1.0-a-9410d45923"
Sep 12 17:23:00.314957 kubelet[3021]: I0912 17:23:00.314130 3021 kubelet_node_status.go:75] "Attempting to register node" node="ci-4426.1.0-a-9410d45923"
Sep 12 17:23:00.818038 kubelet[3021]: E0912 17:23:00.817994 3021 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4426.1.0-a-9410d45923\" not found" node="ci-4426.1.0-a-9410d45923"
Sep 12 17:23:00.818574 kubelet[3021]: E0912 17:23:00.816457 3021 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4426.1.0-a-9410d45923\" not found" node="ci-4426.1.0-a-9410d45923"
Sep 12 17:23:00.924136 kubelet[3021]: E0912 17:23:00.924094 3021 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4426.1.0-a-9410d45923\" not found" node="ci-4426.1.0-a-9410d45923"
Sep 12 17:23:00.979958 kubelet[3021]: I0912 17:23:00.979921 3021 kubelet_node_status.go:78] "Successfully registered node" node="ci-4426.1.0-a-9410d45923"
Sep 12 17:23:01.039891 kubelet[3021]: I0912 17:23:01.039852 3021 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4426.1.0-a-9410d45923"
Sep 12 17:23:01.066846 kubelet[3021]: E0912 17:23:01.066751 3021 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4426.1.0-a-9410d45923.186498d12b78f214 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4426.1.0-a-9410d45923,UID:ci-4426.1.0-a-9410d45923,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4426.1.0-a-9410d45923,},FirstTimestamp:2025-09-12 17:22:58.72626946 +0000 UTC m=+0.595306386,LastTimestamp:2025-09-12 17:22:58.72626946 +0000 UTC m=+0.595306386,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4426.1.0-a-9410d45923,}"
Sep 12 17:23:01.107675 kubelet[3021]: E0912 17:23:01.106755 3021 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4426.1.0-a-9410d45923\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4426.1.0-a-9410d45923"
Sep 12 17:23:01.107675 kubelet[3021]: I0912 17:23:01.106789 3021 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4426.1.0-a-9410d45923"
Sep 12 17:23:01.108424 kubelet[3021]: E0912 17:23:01.108359 3021 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4426.1.0-a-9410d45923\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4426.1.0-a-9410d45923"
Sep 12 17:23:01.108424 kubelet[3021]: I0912 17:23:01.108380 3021 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4426.1.0-a-9410d45923"
Sep 12 17:23:01.109608 kubelet[3021]: E0912 17:23:01.109586 3021 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4426.1.0-a-9410d45923\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4426.1.0-a-9410d45923"
Sep 12 17:23:01.725872 kubelet[3021]: I0912 17:23:01.725842 3021 apiserver.go:52] "Watching apiserver"
Sep 12 17:23:01.740079 kubelet[3021]: I0912 17:23:01.740049 3021 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 12 17:23:02.735299 kubelet[3021]: I0912 17:23:02.735002 3021 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4426.1.0-a-9410d45923"
Sep 12 17:23:02.745569 kubelet[3021]: W0912 17:23:02.745447 3021 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Sep 12 17:23:03.180650 systemd[1]: Reload requested from client PID 3287 ('systemctl') (unit session-9.scope)...
Sep 12 17:23:03.180666 systemd[1]: Reloading...
Sep 12 17:23:03.263491 zram_generator::config[3337]: No configuration found.
Sep 12 17:23:03.418826 systemd[1]: Reloading finished in 237 ms.
Sep 12 17:23:03.455110 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:23:03.466824 systemd[1]: kubelet.service: Deactivated successfully.
Sep 12 17:23:03.467004 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:23:03.467041 systemd[1]: kubelet.service: Consumed 856ms CPU time, 126.8M memory peak.
Sep 12 17:23:03.468845 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:23:03.568521 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:23:03.575811 (kubelet)[3398]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 12 17:23:03.607263 kubelet[3398]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 17:23:03.607263 kubelet[3398]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 12 17:23:03.607263 kubelet[3398]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 17:23:03.608505 kubelet[3398]: I0912 17:23:03.607653 3398 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 12 17:23:03.614241 kubelet[3398]: I0912 17:23:03.614213 3398 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Sep 12 17:23:03.614241 kubelet[3398]: I0912 17:23:03.614233 3398 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 12 17:23:03.614412 kubelet[3398]: I0912 17:23:03.614394 3398 server.go:954] "Client rotation is on, will bootstrap in background"
Sep 12 17:23:03.615408 kubelet[3398]: I0912 17:23:03.615391 3398 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Sep 12 17:23:03.691343 kubelet[3398]: I0912 17:23:03.691292 3398 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 12 17:23:03.698495 kubelet[3398]: I0912 17:23:03.698097 3398 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 12 17:23:03.701103 kubelet[3398]: I0912 17:23:03.701085 3398 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 12 17:23:03.701259 kubelet[3398]: I0912 17:23:03.701237 3398 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 12 17:23:03.701389 kubelet[3398]: I0912 17:23:03.701258 3398 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4426.1.0-a-9410d45923","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 12 17:23:03.701457 kubelet[3398]: I0912 17:23:03.701412 3398 topology_manager.go:138] "Creating topology manager with none policy"
Sep 12 17:23:03.701457 kubelet[3398]: I0912 17:23:03.701426 3398 container_manager_linux.go:304] "Creating device plugin manager"
Sep 12 17:23:03.701520 kubelet[3398]: I0912 17:23:03.701483 3398 state_mem.go:36] "Initialized new in-memory state store"
Sep 12 17:23:03.701950 kubelet[3398]: I0912 17:23:03.701583 3398 kubelet.go:446] "Attempting to sync node with API server"
Sep 12 17:23:03.701950 kubelet[3398]: I0912 17:23:03.701594 3398 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 12 17:23:03.701950 kubelet[3398]: I0912 17:23:03.701630 3398 kubelet.go:352] "Adding apiserver pod source"
Sep 12 17:23:03.701950 kubelet[3398]: I0912 17:23:03.701639 3398 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 12 17:23:03.702270 kubelet[3398]: I0912 17:23:03.702257 3398 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Sep 12 17:23:03.702677 kubelet[3398]: I0912 17:23:03.702662 3398 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 12 17:23:03.703107 kubelet[3398]: I0912 17:23:03.703090 3398 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 12 17:23:03.703190 kubelet[3398]: I0912 17:23:03.703183 3398 server.go:1287] "Started kubelet"
Sep 12 17:23:03.706763 kubelet[3398]: I0912 17:23:03.706405 3398 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Sep 12 17:23:03.711954 kubelet[3398]: I0912 17:23:03.710326 3398 server.go:479] "Adding debug handlers to kubelet server"
Sep 12 17:23:03.713495 kubelet[3398]: I0912 17:23:03.713416 3398 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 12 17:23:03.715653 kubelet[3398]: I0912 17:23:03.715631 3398 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 12 17:23:03.716426 kubelet[3398]: I0912 17:23:03.716406 3398 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 12 17:23:03.721454 kubelet[3398]: I0912 17:23:03.717658 3398 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 12 17:23:03.721454 kubelet[3398]: I0912 17:23:03.719076 3398 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 12 17:23:03.722361 kubelet[3398]: E0912 17:23:03.722092 3398 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4426.1.0-a-9410d45923\" not found"
Sep 12 17:23:03.729788 kubelet[3398]: I0912 17:23:03.729772 3398 factory.go:221] Registration of the systemd container factory successfully
Sep 12 17:23:03.729963 kubelet[3398]: I0912 17:23:03.729947 3398 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 12 17:23:03.732320 kubelet[3398]: E0912 17:23:03.732304 3398 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 12 17:23:03.732546 kubelet[3398]: I0912 17:23:03.732532 3398 factory.go:221] Registration of the containerd container factory successfully
Sep 12 17:23:03.734003 kubelet[3398]: I0912 17:23:03.733759 3398 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 12 17:23:03.734003 kubelet[3398]: I0912 17:23:03.733857 3398 reconciler.go:26] "Reconciler: start to sync state"
Sep 12 17:23:03.737040 kubelet[3398]: I0912 17:23:03.737003 3398 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 12 17:23:03.738188 kubelet[3398]: I0912 17:23:03.738158 3398 kubelet_network_linux.go:50] "Initialized iptables rules."
protocol="IPv6" Sep 12 17:23:03.738188 kubelet[3398]: I0912 17:23:03.738183 3398 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 12 17:23:03.738278 kubelet[3398]: I0912 17:23:03.738203 3398 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 12 17:23:03.738278 kubelet[3398]: I0912 17:23:03.738208 3398 kubelet.go:2382] "Starting kubelet main sync loop" Sep 12 17:23:03.738278 kubelet[3398]: E0912 17:23:03.738247 3398 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 17:23:03.770918 kubelet[3398]: I0912 17:23:03.770841 3398 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 12 17:23:03.770918 kubelet[3398]: I0912 17:23:03.770857 3398 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 12 17:23:03.771161 kubelet[3398]: I0912 17:23:03.770990 3398 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:23:03.771320 kubelet[3398]: I0912 17:23:03.771307 3398 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 12 17:23:03.771378 kubelet[3398]: I0912 17:23:03.771362 3398 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 12 17:23:03.771418 kubelet[3398]: I0912 17:23:03.771413 3398 policy_none.go:49] "None policy: Start" Sep 12 17:23:03.771483 kubelet[3398]: I0912 17:23:03.771458 3398 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 12 17:23:03.771538 kubelet[3398]: I0912 17:23:03.771531 3398 state_mem.go:35] "Initializing new in-memory state store" Sep 12 17:23:03.771665 kubelet[3398]: I0912 17:23:03.771655 3398 state_mem.go:75] "Updated machine memory state" Sep 12 17:23:03.774927 kubelet[3398]: I0912 17:23:03.774913 3398 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 17:23:03.776112 kubelet[3398]: I0912 
17:23:03.775116 3398 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 17:23:03.776112 kubelet[3398]: I0912 17:23:03.775129 3398 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 17:23:03.776112 kubelet[3398]: I0912 17:23:03.775280 3398 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 17:23:03.778049 kubelet[3398]: E0912 17:23:03.778034 3398 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 12 17:23:03.839322 kubelet[3398]: I0912 17:23:03.839288 3398 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4426.1.0-a-9410d45923" Sep 12 17:23:03.839558 kubelet[3398]: I0912 17:23:03.839331 3398 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4426.1.0-a-9410d45923" Sep 12 17:23:03.839679 kubelet[3398]: I0912 17:23:03.839421 3398 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4426.1.0-a-9410d45923" Sep 12 17:23:03.867820 kubelet[3398]: W0912 17:23:03.867771 3398 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 12 17:23:03.871562 kubelet[3398]: W0912 17:23:03.871507 3398 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 12 17:23:03.874113 kubelet[3398]: W0912 17:23:03.874080 3398 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 12 17:23:03.874284 kubelet[3398]: E0912 17:23:03.874239 3398 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4426.1.0-a-9410d45923\" 
already exists" pod="kube-system/kube-controller-manager-ci-4426.1.0-a-9410d45923" Sep 12 17:23:03.877528 kubelet[3398]: I0912 17:23:03.877507 3398 kubelet_node_status.go:75] "Attempting to register node" node="ci-4426.1.0-a-9410d45923" Sep 12 17:23:03.898619 kubelet[3398]: I0912 17:23:03.898583 3398 kubelet_node_status.go:124] "Node was previously registered" node="ci-4426.1.0-a-9410d45923" Sep 12 17:23:03.898719 kubelet[3398]: I0912 17:23:03.898696 3398 kubelet_node_status.go:78] "Successfully registered node" node="ci-4426.1.0-a-9410d45923" Sep 12 17:23:03.935458 kubelet[3398]: I0912 17:23:03.935415 3398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6a66a313f9d444eb5a1427726a96d703-k8s-certs\") pod \"kube-apiserver-ci-4426.1.0-a-9410d45923\" (UID: \"6a66a313f9d444eb5a1427726a96d703\") " pod="kube-system/kube-apiserver-ci-4426.1.0-a-9410d45923" Sep 12 17:23:03.935458 kubelet[3398]: I0912 17:23:03.935455 3398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d0b477100288d6664a738b0c3a1523fa-ca-certs\") pod \"kube-controller-manager-ci-4426.1.0-a-9410d45923\" (UID: \"d0b477100288d6664a738b0c3a1523fa\") " pod="kube-system/kube-controller-manager-ci-4426.1.0-a-9410d45923" Sep 12 17:23:03.935613 kubelet[3398]: I0912 17:23:03.935579 3398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d0b477100288d6664a738b0c3a1523fa-kubeconfig\") pod \"kube-controller-manager-ci-4426.1.0-a-9410d45923\" (UID: \"d0b477100288d6664a738b0c3a1523fa\") " pod="kube-system/kube-controller-manager-ci-4426.1.0-a-9410d45923" Sep 12 17:23:03.935613 kubelet[3398]: I0912 17:23:03.935601 3398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d0b477100288d6664a738b0c3a1523fa-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4426.1.0-a-9410d45923\" (UID: \"d0b477100288d6664a738b0c3a1523fa\") " pod="kube-system/kube-controller-manager-ci-4426.1.0-a-9410d45923" Sep 12 17:23:03.935648 kubelet[3398]: I0912 17:23:03.935614 3398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/45ec4ff6a682c61e6a803d9cb0dd0891-kubeconfig\") pod \"kube-scheduler-ci-4426.1.0-a-9410d45923\" (UID: \"45ec4ff6a682c61e6a803d9cb0dd0891\") " pod="kube-system/kube-scheduler-ci-4426.1.0-a-9410d45923" Sep 12 17:23:03.935648 kubelet[3398]: I0912 17:23:03.935626 3398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6a66a313f9d444eb5a1427726a96d703-ca-certs\") pod \"kube-apiserver-ci-4426.1.0-a-9410d45923\" (UID: \"6a66a313f9d444eb5a1427726a96d703\") " pod="kube-system/kube-apiserver-ci-4426.1.0-a-9410d45923" Sep 12 17:23:03.935796 kubelet[3398]: I0912 17:23:03.935739 3398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6a66a313f9d444eb5a1427726a96d703-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4426.1.0-a-9410d45923\" (UID: \"6a66a313f9d444eb5a1427726a96d703\") " pod="kube-system/kube-apiserver-ci-4426.1.0-a-9410d45923" Sep 12 17:23:03.935796 kubelet[3398]: I0912 17:23:03.935762 3398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d0b477100288d6664a738b0c3a1523fa-flexvolume-dir\") pod \"kube-controller-manager-ci-4426.1.0-a-9410d45923\" (UID: \"d0b477100288d6664a738b0c3a1523fa\") " 
pod="kube-system/kube-controller-manager-ci-4426.1.0-a-9410d45923" Sep 12 17:23:03.935796 kubelet[3398]: I0912 17:23:03.935775 3398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d0b477100288d6664a738b0c3a1523fa-k8s-certs\") pod \"kube-controller-manager-ci-4426.1.0-a-9410d45923\" (UID: \"d0b477100288d6664a738b0c3a1523fa\") " pod="kube-system/kube-controller-manager-ci-4426.1.0-a-9410d45923" Sep 12 17:23:04.711864 kubelet[3398]: I0912 17:23:04.711593 3398 apiserver.go:52] "Watching apiserver" Sep 12 17:23:04.734844 kubelet[3398]: I0912 17:23:04.734797 3398 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 12 17:23:04.757829 kubelet[3398]: I0912 17:23:04.756003 3398 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4426.1.0-a-9410d45923" Sep 12 17:23:04.757829 kubelet[3398]: I0912 17:23:04.756185 3398 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4426.1.0-a-9410d45923" Sep 12 17:23:04.772412 kubelet[3398]: W0912 17:23:04.772391 3398 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 12 17:23:04.772604 kubelet[3398]: E0912 17:23:04.772589 3398 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4426.1.0-a-9410d45923\" already exists" pod="kube-system/kube-scheduler-ci-4426.1.0-a-9410d45923" Sep 12 17:23:04.777997 kubelet[3398]: W0912 17:23:04.777933 3398 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 12 17:23:04.778306 kubelet[3398]: E0912 17:23:04.778251 3398 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4426.1.0-a-9410d45923\" already exists" 
pod="kube-system/kube-apiserver-ci-4426.1.0-a-9410d45923" Sep 12 17:23:04.788936 kubelet[3398]: I0912 17:23:04.788882 3398 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4426.1.0-a-9410d45923" podStartSLOduration=1.7888686 podStartE2EDuration="1.7888686s" podCreationTimestamp="2025-09-12 17:23:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:23:04.778232555 +0000 UTC m=+1.199150395" watchObservedRunningTime="2025-09-12 17:23:04.7888686 +0000 UTC m=+1.209786448" Sep 12 17:23:04.802146 kubelet[3398]: I0912 17:23:04.801965 3398 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4426.1.0-a-9410d45923" podStartSLOduration=1.801953124 podStartE2EDuration="1.801953124s" podCreationTimestamp="2025-09-12 17:23:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:23:04.789091793 +0000 UTC m=+1.210009633" watchObservedRunningTime="2025-09-12 17:23:04.801953124 +0000 UTC m=+1.222870964" Sep 12 17:23:04.829167 kubelet[3398]: I0912 17:23:04.829126 3398 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4426.1.0-a-9410d45923" podStartSLOduration=2.829111755 podStartE2EDuration="2.829111755s" podCreationTimestamp="2025-09-12 17:23:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:23:04.802055472 +0000 UTC m=+1.222973320" watchObservedRunningTime="2025-09-12 17:23:04.829111755 +0000 UTC m=+1.250029595" Sep 12 17:23:08.536024 kubelet[3398]: I0912 17:23:08.535910 3398 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 12 17:23:08.536989 containerd[1875]: 
time="2025-09-12T17:23:08.536575702Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 12 17:23:08.537203 kubelet[3398]: I0912 17:23:08.536827 3398 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 12 17:23:09.470419 systemd[1]: Created slice kubepods-besteffort-pod866bac59_a217_4ec9_8e5c_cb48d4d8113e.slice - libcontainer container kubepods-besteffort-pod866bac59_a217_4ec9_8e5c_cb48d4d8113e.slice. Sep 12 17:23:09.566436 kubelet[3398]: I0912 17:23:09.566407 3398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/866bac59-a217-4ec9-8e5c-cb48d4d8113e-xtables-lock\") pod \"kube-proxy-g7t7g\" (UID: \"866bac59-a217-4ec9-8e5c-cb48d4d8113e\") " pod="kube-system/kube-proxy-g7t7g" Sep 12 17:23:09.566436 kubelet[3398]: I0912 17:23:09.566438 3398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/866bac59-a217-4ec9-8e5c-cb48d4d8113e-kube-proxy\") pod \"kube-proxy-g7t7g\" (UID: \"866bac59-a217-4ec9-8e5c-cb48d4d8113e\") " pod="kube-system/kube-proxy-g7t7g" Sep 12 17:23:09.566762 kubelet[3398]: I0912 17:23:09.566452 3398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/866bac59-a217-4ec9-8e5c-cb48d4d8113e-lib-modules\") pod \"kube-proxy-g7t7g\" (UID: \"866bac59-a217-4ec9-8e5c-cb48d4d8113e\") " pod="kube-system/kube-proxy-g7t7g" Sep 12 17:23:09.566762 kubelet[3398]: I0912 17:23:09.566470 3398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2qrm\" (UniqueName: \"kubernetes.io/projected/866bac59-a217-4ec9-8e5c-cb48d4d8113e-kube-api-access-x2qrm\") pod \"kube-proxy-g7t7g\" (UID: \"866bac59-a217-4ec9-8e5c-cb48d4d8113e\") " 
pod="kube-system/kube-proxy-g7t7g" Sep 12 17:23:09.650088 systemd[1]: Created slice kubepods-besteffort-pod460c18e2_4af0_422d_87aa_a9f3fd5d5539.slice - libcontainer container kubepods-besteffort-pod460c18e2_4af0_422d_87aa_a9f3fd5d5539.slice. Sep 12 17:23:09.667161 kubelet[3398]: I0912 17:23:09.667095 3398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/460c18e2-4af0-422d-87aa-a9f3fd5d5539-var-lib-calico\") pod \"tigera-operator-755d956888-g9gxk\" (UID: \"460c18e2-4af0-422d-87aa-a9f3fd5d5539\") " pod="tigera-operator/tigera-operator-755d956888-g9gxk" Sep 12 17:23:09.667161 kubelet[3398]: I0912 17:23:09.667123 3398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grc7f\" (UniqueName: \"kubernetes.io/projected/460c18e2-4af0-422d-87aa-a9f3fd5d5539-kube-api-access-grc7f\") pod \"tigera-operator-755d956888-g9gxk\" (UID: \"460c18e2-4af0-422d-87aa-a9f3fd5d5539\") " pod="tigera-operator/tigera-operator-755d956888-g9gxk" Sep 12 17:23:09.785396 containerd[1875]: time="2025-09-12T17:23:09.785309760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-g7t7g,Uid:866bac59-a217-4ec9-8e5c-cb48d4d8113e,Namespace:kube-system,Attempt:0,}" Sep 12 17:23:09.821628 containerd[1875]: time="2025-09-12T17:23:09.821567216Z" level=info msg="connecting to shim f86525ad8c8756c2385ad2261a16f9a0e0173fd2879ec618f758ac52fb40796f" address="unix:///run/containerd/s/46d488008e6f12ee97a3bc7cae8c6078ae2d786533e0e0960c96afde6a8e7198" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:23:09.838585 systemd[1]: Started cri-containerd-f86525ad8c8756c2385ad2261a16f9a0e0173fd2879ec618f758ac52fb40796f.scope - libcontainer container f86525ad8c8756c2385ad2261a16f9a0e0173fd2879ec618f758ac52fb40796f. 
Sep 12 17:23:09.859182 containerd[1875]: time="2025-09-12T17:23:09.859095074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-g7t7g,Uid:866bac59-a217-4ec9-8e5c-cb48d4d8113e,Namespace:kube-system,Attempt:0,} returns sandbox id \"f86525ad8c8756c2385ad2261a16f9a0e0173fd2879ec618f758ac52fb40796f\"" Sep 12 17:23:09.862478 containerd[1875]: time="2025-09-12T17:23:09.862372617Z" level=info msg="CreateContainer within sandbox \"f86525ad8c8756c2385ad2261a16f9a0e0173fd2879ec618f758ac52fb40796f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 12 17:23:09.886344 containerd[1875]: time="2025-09-12T17:23:09.886318291Z" level=info msg="Container a0f42639eb9ee8e1b41967c2337faf97283243ca0b426988bcc356a406141bc3: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:23:09.906163 containerd[1875]: time="2025-09-12T17:23:09.906090475Z" level=info msg="CreateContainer within sandbox \"f86525ad8c8756c2385ad2261a16f9a0e0173fd2879ec618f758ac52fb40796f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a0f42639eb9ee8e1b41967c2337faf97283243ca0b426988bcc356a406141bc3\"" Sep 12 17:23:09.907693 containerd[1875]: time="2025-09-12T17:23:09.907669553Z" level=info msg="StartContainer for \"a0f42639eb9ee8e1b41967c2337faf97283243ca0b426988bcc356a406141bc3\"" Sep 12 17:23:09.909075 containerd[1875]: time="2025-09-12T17:23:09.909049878Z" level=info msg="connecting to shim a0f42639eb9ee8e1b41967c2337faf97283243ca0b426988bcc356a406141bc3" address="unix:///run/containerd/s/46d488008e6f12ee97a3bc7cae8c6078ae2d786533e0e0960c96afde6a8e7198" protocol=ttrpc version=3 Sep 12 17:23:09.923599 systemd[1]: Started cri-containerd-a0f42639eb9ee8e1b41967c2337faf97283243ca0b426988bcc356a406141bc3.scope - libcontainer container a0f42639eb9ee8e1b41967c2337faf97283243ca0b426988bcc356a406141bc3. 
Sep 12 17:23:09.954461 containerd[1875]: time="2025-09-12T17:23:09.954387688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-g9gxk,Uid:460c18e2-4af0-422d-87aa-a9f3fd5d5539,Namespace:tigera-operator,Attempt:0,}" Sep 12 17:23:09.955153 containerd[1875]: time="2025-09-12T17:23:09.955120268Z" level=info msg="StartContainer for \"a0f42639eb9ee8e1b41967c2337faf97283243ca0b426988bcc356a406141bc3\" returns successfully" Sep 12 17:23:09.997928 containerd[1875]: time="2025-09-12T17:23:09.997601870Z" level=info msg="connecting to shim 8c1b39f853fff8c726b383accc6cd939f0dcf87ef858873ac28b35197ed49a38" address="unix:///run/containerd/s/08f1f04ae793a9ae82debcd66ecb28a66dc129cb936519237f9923fdb5ce3a06" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:23:10.014612 systemd[1]: Started cri-containerd-8c1b39f853fff8c726b383accc6cd939f0dcf87ef858873ac28b35197ed49a38.scope - libcontainer container 8c1b39f853fff8c726b383accc6cd939f0dcf87ef858873ac28b35197ed49a38. Sep 12 17:23:10.046459 containerd[1875]: time="2025-09-12T17:23:10.046406054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-g9gxk,Uid:460c18e2-4af0-422d-87aa-a9f3fd5d5539,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"8c1b39f853fff8c726b383accc6cd939f0dcf87ef858873ac28b35197ed49a38\"" Sep 12 17:23:10.048326 containerd[1875]: time="2025-09-12T17:23:10.048308280Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 12 17:23:10.676287 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1135373695.mount: Deactivated successfully. 
Sep 12 17:23:10.779985 kubelet[3398]: I0912 17:23:10.779807 3398 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-g7t7g" podStartSLOduration=1.779791173 podStartE2EDuration="1.779791173s" podCreationTimestamp="2025-09-12 17:23:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:23:10.779654016 +0000 UTC m=+7.200571856" watchObservedRunningTime="2025-09-12 17:23:10.779791173 +0000 UTC m=+7.200709013" Sep 12 17:23:11.339678 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2517416051.mount: Deactivated successfully. Sep 12 17:23:12.078501 containerd[1875]: time="2025-09-12T17:23:12.078308357Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:23:12.081713 containerd[1875]: time="2025-09-12T17:23:12.081683472Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=22152365" Sep 12 17:23:12.084983 containerd[1875]: time="2025-09-12T17:23:12.084945007Z" level=info msg="ImageCreate event name:\"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:23:12.089498 containerd[1875]: time="2025-09-12T17:23:12.088961739Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:23:12.089498 containerd[1875]: time="2025-09-12T17:23:12.089391723Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest 
\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"22148360\" in 2.041001024s" Sep 12 17:23:12.089498 containerd[1875]: time="2025-09-12T17:23:12.089415308Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\"" Sep 12 17:23:12.092607 containerd[1875]: time="2025-09-12T17:23:12.092587072Z" level=info msg="CreateContainer within sandbox \"8c1b39f853fff8c726b383accc6cd939f0dcf87ef858873ac28b35197ed49a38\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 12 17:23:12.112885 containerd[1875]: time="2025-09-12T17:23:12.112860643Z" level=info msg="Container 00f146af0bb2f60249466214fdfc548b51b64982f3b9cc86a89a62b9129425d4: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:23:12.125491 containerd[1875]: time="2025-09-12T17:23:12.125443932Z" level=info msg="CreateContainer within sandbox \"8c1b39f853fff8c726b383accc6cd939f0dcf87ef858873ac28b35197ed49a38\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"00f146af0bb2f60249466214fdfc548b51b64982f3b9cc86a89a62b9129425d4\"" Sep 12 17:23:12.126122 containerd[1875]: time="2025-09-12T17:23:12.126101333Z" level=info msg="StartContainer for \"00f146af0bb2f60249466214fdfc548b51b64982f3b9cc86a89a62b9129425d4\"" Sep 12 17:23:12.128178 containerd[1875]: time="2025-09-12T17:23:12.127910540Z" level=info msg="connecting to shim 00f146af0bb2f60249466214fdfc548b51b64982f3b9cc86a89a62b9129425d4" address="unix:///run/containerd/s/08f1f04ae793a9ae82debcd66ecb28a66dc129cb936519237f9923fdb5ce3a06" protocol=ttrpc version=3 Sep 12 17:23:12.148572 systemd[1]: Started cri-containerd-00f146af0bb2f60249466214fdfc548b51b64982f3b9cc86a89a62b9129425d4.scope - libcontainer container 00f146af0bb2f60249466214fdfc548b51b64982f3b9cc86a89a62b9129425d4. 
Sep 12 17:23:12.179489 containerd[1875]: time="2025-09-12T17:23:12.179048990Z" level=info msg="StartContainer for \"00f146af0bb2f60249466214fdfc548b51b64982f3b9cc86a89a62b9129425d4\" returns successfully" Sep 12 17:23:14.361964 kubelet[3398]: I0912 17:23:14.361906 3398 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-g9gxk" podStartSLOduration=3.31930042 podStartE2EDuration="5.361893506s" podCreationTimestamp="2025-09-12 17:23:09 +0000 UTC" firstStartedPulling="2025-09-12 17:23:10.047638054 +0000 UTC m=+6.468555894" lastFinishedPulling="2025-09-12 17:23:12.090231132 +0000 UTC m=+8.511148980" observedRunningTime="2025-09-12 17:23:12.797423331 +0000 UTC m=+9.218341171" watchObservedRunningTime="2025-09-12 17:23:14.361893506 +0000 UTC m=+10.782811346" Sep 12 17:23:17.237575 sudo[2374]: pam_unix(sudo:session): session closed for user root Sep 12 17:23:17.326578 sshd[2373]: Connection closed by 10.200.16.10 port 33834 Sep 12 17:23:17.327361 sshd-session[2370]: pam_unix(sshd:session): session closed for user core Sep 12 17:23:17.332049 systemd-logind[1846]: Session 9 logged out. Waiting for processes to exit. Sep 12 17:23:17.333544 systemd[1]: sshd@6-10.200.20.11:22-10.200.16.10:33834.service: Deactivated successfully. Sep 12 17:23:17.336895 systemd[1]: session-9.scope: Deactivated successfully. Sep 12 17:23:17.337040 systemd[1]: session-9.scope: Consumed 2.764s CPU time, 219.6M memory peak. Sep 12 17:23:17.338994 systemd-logind[1846]: Removed session 9. Sep 12 17:23:22.643767 systemd[1]: Created slice kubepods-besteffort-pod0897d410_5087_4930_92e5_c5d5473b7f73.slice - libcontainer container kubepods-besteffort-pod0897d410_5087_4930_92e5_c5d5473b7f73.slice. 
Sep 12 17:23:22.738005 kubelet[3398]: I0912 17:23:22.737958 3398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddrql\" (UniqueName: \"kubernetes.io/projected/0897d410-5087-4930-92e5-c5d5473b7f73-kube-api-access-ddrql\") pod \"calico-typha-5d9467df64-fgx9n\" (UID: \"0897d410-5087-4930-92e5-c5d5473b7f73\") " pod="calico-system/calico-typha-5d9467df64-fgx9n" Sep 12 17:23:22.738005 kubelet[3398]: I0912 17:23:22.738005 3398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0897d410-5087-4930-92e5-c5d5473b7f73-tigera-ca-bundle\") pod \"calico-typha-5d9467df64-fgx9n\" (UID: \"0897d410-5087-4930-92e5-c5d5473b7f73\") " pod="calico-system/calico-typha-5d9467df64-fgx9n" Sep 12 17:23:22.738563 kubelet[3398]: I0912 17:23:22.738018 3398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/0897d410-5087-4930-92e5-c5d5473b7f73-typha-certs\") pod \"calico-typha-5d9467df64-fgx9n\" (UID: \"0897d410-5087-4930-92e5-c5d5473b7f73\") " pod="calico-system/calico-typha-5d9467df64-fgx9n" Sep 12 17:23:22.757853 systemd[1]: Created slice kubepods-besteffort-podd00d39d2_eeca_47fe_89b6_f719f043a8c0.slice - libcontainer container kubepods-besteffort-podd00d39d2_eeca_47fe_89b6_f719f043a8c0.slice. 
Sep 12 17:23:22.838475 kubelet[3398]: I0912 17:23:22.838421 3398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfr59\" (UniqueName: \"kubernetes.io/projected/d00d39d2-eeca-47fe-89b6-f719f043a8c0-kube-api-access-gfr59\") pod \"calico-node-5sf6x\" (UID: \"d00d39d2-eeca-47fe-89b6-f719f043a8c0\") " pod="calico-system/calico-node-5sf6x"
Sep 12 17:23:22.838611 kubelet[3398]: I0912 17:23:22.838575 3398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/d00d39d2-eeca-47fe-89b6-f719f043a8c0-cni-net-dir\") pod \"calico-node-5sf6x\" (UID: \"d00d39d2-eeca-47fe-89b6-f719f043a8c0\") " pod="calico-system/calico-node-5sf6x"
Sep 12 17:23:22.838611 kubelet[3398]: I0912 17:23:22.838597 3398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/d00d39d2-eeca-47fe-89b6-f719f043a8c0-node-certs\") pod \"calico-node-5sf6x\" (UID: \"d00d39d2-eeca-47fe-89b6-f719f043a8c0\") " pod="calico-system/calico-node-5sf6x"
Sep 12 17:23:22.838656 kubelet[3398]: I0912 17:23:22.838611 3398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d00d39d2-eeca-47fe-89b6-f719f043a8c0-var-lib-calico\") pod \"calico-node-5sf6x\" (UID: \"d00d39d2-eeca-47fe-89b6-f719f043a8c0\") " pod="calico-system/calico-node-5sf6x"
Sep 12 17:23:22.838656 kubelet[3398]: I0912 17:23:22.838622 3398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/d00d39d2-eeca-47fe-89b6-f719f043a8c0-policysync\") pod \"calico-node-5sf6x\" (UID: \"d00d39d2-eeca-47fe-89b6-f719f043a8c0\") " pod="calico-system/calico-node-5sf6x"
Sep 12 17:23:22.838656 kubelet[3398]: I0912 17:23:22.838632 3398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/d00d39d2-eeca-47fe-89b6-f719f043a8c0-var-run-calico\") pod \"calico-node-5sf6x\" (UID: \"d00d39d2-eeca-47fe-89b6-f719f043a8c0\") " pod="calico-system/calico-node-5sf6x"
Sep 12 17:23:22.838697 kubelet[3398]: I0912 17:23:22.838670 3398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d00d39d2-eeca-47fe-89b6-f719f043a8c0-tigera-ca-bundle\") pod \"calico-node-5sf6x\" (UID: \"d00d39d2-eeca-47fe-89b6-f719f043a8c0\") " pod="calico-system/calico-node-5sf6x"
Sep 12 17:23:22.838715 kubelet[3398]: I0912 17:23:22.838680 3398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d00d39d2-eeca-47fe-89b6-f719f043a8c0-xtables-lock\") pod \"calico-node-5sf6x\" (UID: \"d00d39d2-eeca-47fe-89b6-f719f043a8c0\") " pod="calico-system/calico-node-5sf6x"
Sep 12 17:23:22.838901 kubelet[3398]: I0912 17:23:22.838879 3398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/d00d39d2-eeca-47fe-89b6-f719f043a8c0-cni-log-dir\") pod \"calico-node-5sf6x\" (UID: \"d00d39d2-eeca-47fe-89b6-f719f043a8c0\") " pod="calico-system/calico-node-5sf6x"
Sep 12 17:23:22.838919 kubelet[3398]: I0912 17:23:22.838902 3398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/d00d39d2-eeca-47fe-89b6-f719f043a8c0-flexvol-driver-host\") pod \"calico-node-5sf6x\" (UID: \"d00d39d2-eeca-47fe-89b6-f719f043a8c0\") " pod="calico-system/calico-node-5sf6x"
Sep 12 17:23:22.838934 kubelet[3398]: I0912 17:23:22.838920 3398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/d00d39d2-eeca-47fe-89b6-f719f043a8c0-cni-bin-dir\") pod \"calico-node-5sf6x\" (UID: \"d00d39d2-eeca-47fe-89b6-f719f043a8c0\") " pod="calico-system/calico-node-5sf6x"
Sep 12 17:23:22.839584 kubelet[3398]: I0912 17:23:22.838931 3398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d00d39d2-eeca-47fe-89b6-f719f043a8c0-lib-modules\") pod \"calico-node-5sf6x\" (UID: \"d00d39d2-eeca-47fe-89b6-f719f043a8c0\") " pod="calico-system/calico-node-5sf6x"
Sep 12 17:23:22.884296 kubelet[3398]: E0912 17:23:22.883909 3398 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vj2b6" podUID="91cf3b25-a4f8-46ef-a218-a6fd5f87b47a"
Sep 12 17:23:22.940314 kubelet[3398]: I0912 17:23:22.939919 3398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/91cf3b25-a4f8-46ef-a218-a6fd5f87b47a-kubelet-dir\") pod \"csi-node-driver-vj2b6\" (UID: \"91cf3b25-a4f8-46ef-a218-a6fd5f87b47a\") " pod="calico-system/csi-node-driver-vj2b6"
Sep 12 17:23:22.940314 kubelet[3398]: I0912 17:23:22.939979 3398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/91cf3b25-a4f8-46ef-a218-a6fd5f87b47a-registration-dir\") pod \"csi-node-driver-vj2b6\" (UID: \"91cf3b25-a4f8-46ef-a218-a6fd5f87b47a\") " pod="calico-system/csi-node-driver-vj2b6"
Sep 12 17:23:22.940314 kubelet[3398]: I0912 17:23:22.939992 3398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/91cf3b25-a4f8-46ef-a218-a6fd5f87b47a-socket-dir\") pod \"csi-node-driver-vj2b6\" (UID: \"91cf3b25-a4f8-46ef-a218-a6fd5f87b47a\") " pod="calico-system/csi-node-driver-vj2b6"
Sep 12 17:23:22.940314 kubelet[3398]: I0912 17:23:22.940024 3398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/91cf3b25-a4f8-46ef-a218-a6fd5f87b47a-varrun\") pod \"csi-node-driver-vj2b6\" (UID: \"91cf3b25-a4f8-46ef-a218-a6fd5f87b47a\") " pod="calico-system/csi-node-driver-vj2b6"
Sep 12 17:23:22.940314 kubelet[3398]: I0912 17:23:22.940037 3398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66wlv\" (UniqueName: \"kubernetes.io/projected/91cf3b25-a4f8-46ef-a218-a6fd5f87b47a-kube-api-access-66wlv\") pod \"csi-node-driver-vj2b6\" (UID: \"91cf3b25-a4f8-46ef-a218-a6fd5f87b47a\") " pod="calico-system/csi-node-driver-vj2b6"
Sep 12 17:23:22.942727 kubelet[3398]: E0912 17:23:22.942697 3398 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:23:22.942912 kubelet[3398]: W0912 17:23:22.942887 3398 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:23:22.942949 kubelet[3398]: E0912 17:23:22.942917 3398 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Sep 12 17:23:22.955341 containerd[1875]: time="2025-09-12T17:23:22.954860897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5d9467df64-fgx9n,Uid:0897d410-5087-4930-92e5-c5d5473b7f73,Namespace:calico-system,Attempt:0,}"
Sep 12 17:23:23.006684 containerd[1875]: time="2025-09-12T17:23:23.006645345Z" level=info msg="connecting to shim 031e2f61207d8d6c221e0414d8583c73b5ce609c8845950511adba3c9c022cc8" address="unix:///run/containerd/s/4c9ffd82b4cac731e79a5ef23ff4ae8b54741529ac08931e36273f6d2935dace" namespace=k8s.io protocol=ttrpc version=3
Sep 12 17:23:23.030604 systemd[1]: Started cri-containerd-031e2f61207d8d6c221e0414d8583c73b5ce609c8845950511adba3c9c022cc8.scope - libcontainer container 031e2f61207d8d6c221e0414d8583c73b5ce609c8845950511adba3c9c022cc8.
Sep 12 17:23:23.054180 kubelet[3398]: E0912 17:23:23.054146 3398 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:23:23.054486 kubelet[3398]: W0912 17:23:23.054241 3398 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:23:23.054486 kubelet[3398]: E0912 17:23:23.054256 3398 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Sep 12 17:23:23.061554 containerd[1875]: time="2025-09-12T17:23:23.061507314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5sf6x,Uid:d00d39d2-eeca-47fe-89b6-f719f043a8c0,Namespace:calico-system,Attempt:0,}"
Sep 12 17:23:23.063160 kubelet[3398]: E0912 17:23:23.063143 3398 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:23:23.063258 kubelet[3398]: W0912 17:23:23.063217 3398 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:23:23.063258 kubelet[3398]: E0912 17:23:23.063233 3398 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:23:23.081742 containerd[1875]: time="2025-09-12T17:23:23.081706561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5d9467df64-fgx9n,Uid:0897d410-5087-4930-92e5-c5d5473b7f73,Namespace:calico-system,Attempt:0,} returns sandbox id \"031e2f61207d8d6c221e0414d8583c73b5ce609c8845950511adba3c9c022cc8\""
Sep 12 17:23:23.083645 containerd[1875]: time="2025-09-12T17:23:23.083560523Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\""
Sep 12 17:23:23.116261 containerd[1875]: time="2025-09-12T17:23:23.116229535Z" level=info msg="connecting to shim e6d0eb5b49267305034c9fa532e0f81bdfd34c633cb9a7a2a689dcfe3bb068ca" address="unix:///run/containerd/s/58cc87a624af22830244b1526b09dba84550fd158457e0819655413e394c59a1" namespace=k8s.io protocol=ttrpc version=3
Sep 12 17:23:23.134598 systemd[1]: Started cri-containerd-e6d0eb5b49267305034c9fa532e0f81bdfd34c633cb9a7a2a689dcfe3bb068ca.scope - libcontainer container e6d0eb5b49267305034c9fa532e0f81bdfd34c633cb9a7a2a689dcfe3bb068ca.
Sep 12 17:23:23.161196 containerd[1875]: time="2025-09-12T17:23:23.161158615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5sf6x,Uid:d00d39d2-eeca-47fe-89b6-f719f043a8c0,Namespace:calico-system,Attempt:0,} returns sandbox id \"e6d0eb5b49267305034c9fa532e0f81bdfd34c633cb9a7a2a689dcfe3bb068ca\"" Sep 12 17:23:24.362044 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2070918754.mount: Deactivated successfully. Sep 12 17:23:24.740177 kubelet[3398]: E0912 17:23:24.739344 3398 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vj2b6" podUID="91cf3b25-a4f8-46ef-a218-a6fd5f87b47a" Sep 12 17:23:25.246285 containerd[1875]: time="2025-09-12T17:23:25.246240329Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:23:25.248941 containerd[1875]: time="2025-09-12T17:23:25.248906043Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=33105775" Sep 12 17:23:25.251381 containerd[1875]: time="2025-09-12T17:23:25.251355668Z" level=info msg="ImageCreate event name:\"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:23:25.255775 containerd[1875]: time="2025-09-12T17:23:25.255740801Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:23:25.257943 containerd[1875]: time="2025-09-12T17:23:25.257879910Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id 
\"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"33105629\" in 2.174114835s" Sep 12 17:23:25.257943 containerd[1875]: time="2025-09-12T17:23:25.257923687Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\"" Sep 12 17:23:25.261394 containerd[1875]: time="2025-09-12T17:23:25.261348703Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 12 17:23:25.270928 containerd[1875]: time="2025-09-12T17:23:25.270898408Z" level=info msg="CreateContainer within sandbox \"031e2f61207d8d6c221e0414d8583c73b5ce609c8845950511adba3c9c022cc8\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 12 17:23:25.291494 containerd[1875]: time="2025-09-12T17:23:25.289507688Z" level=info msg="Container f27e6fc4bbea9da3f768e62b75f9aac6f97877a939185a628e6c07e9411e92b2: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:23:25.311283 containerd[1875]: time="2025-09-12T17:23:25.311250716Z" level=info msg="CreateContainer within sandbox \"031e2f61207d8d6c221e0414d8583c73b5ce609c8845950511adba3c9c022cc8\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"f27e6fc4bbea9da3f768e62b75f9aac6f97877a939185a628e6c07e9411e92b2\"" Sep 12 17:23:25.311904 containerd[1875]: time="2025-09-12T17:23:25.311862972Z" level=info msg="StartContainer for \"f27e6fc4bbea9da3f768e62b75f9aac6f97877a939185a628e6c07e9411e92b2\"" Sep 12 17:23:25.312890 containerd[1875]: time="2025-09-12T17:23:25.312793593Z" level=info msg="connecting to shim f27e6fc4bbea9da3f768e62b75f9aac6f97877a939185a628e6c07e9411e92b2" address="unix:///run/containerd/s/4c9ffd82b4cac731e79a5ef23ff4ae8b54741529ac08931e36273f6d2935dace" protocol=ttrpc version=3 Sep 12 
17:23:25.331611 systemd[1]: Started cri-containerd-f27e6fc4bbea9da3f768e62b75f9aac6f97877a939185a628e6c07e9411e92b2.scope - libcontainer container f27e6fc4bbea9da3f768e62b75f9aac6f97877a939185a628e6c07e9411e92b2. Sep 12 17:23:25.363381 containerd[1875]: time="2025-09-12T17:23:25.363345121Z" level=info msg="StartContainer for \"f27e6fc4bbea9da3f768e62b75f9aac6f97877a939185a628e6c07e9411e92b2\" returns successfully" Sep 12 17:23:25.816789 kubelet[3398]: I0912 17:23:25.816646 3398 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5d9467df64-fgx9n" podStartSLOduration=1.6396971809999998 podStartE2EDuration="3.816632415s" podCreationTimestamp="2025-09-12 17:23:22 +0000 UTC" firstStartedPulling="2025-09-12 17:23:23.083057831 +0000 UTC m=+19.503975671" lastFinishedPulling="2025-09-12 17:23:25.259993065 +0000 UTC m=+21.680910905" observedRunningTime="2025-09-12 17:23:25.816180021 +0000 UTC m=+22.237097893" watchObservedRunningTime="2025-09-12 17:23:25.816632415 +0000 UTC m=+22.237550255" Sep 12 17:23:25.841355 kubelet[3398]: E0912 17:23:25.841280 3398 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:23:25.841355 kubelet[3398]: W0912 17:23:25.841301 3398 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:23:25.841355 kubelet[3398]: E0912 17:23:25.841318 3398 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:23:26.711032 containerd[1875]: time="2025-09-12T17:23:26.710982688Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:23:26.713500 containerd[1875]: time="2025-09-12T17:23:26.713450521Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4266814" Sep 12 17:23:26.716387 containerd[1875]: time="2025-09-12T17:23:26.716359933Z" level=info msg="ImageCreate event name:\"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:23:26.722871 containerd[1875]: time="2025-09-12T17:23:26.722823748Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:23:26.723169 containerd[1875]: time="2025-09-12T17:23:26.723086271Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5636015\" in 1.461710694s" Sep 12 17:23:26.723169 containerd[1875]: time="2025-09-12T17:23:26.723114496Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\"" Sep 12 17:23:26.729810 containerd[1875]: time="2025-09-12T17:23:26.729774367Z" level=info msg="CreateContainer within sandbox \"e6d0eb5b49267305034c9fa532e0f81bdfd34c633cb9a7a2a689dcfe3bb068ca\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 12 17:23:26.738752 kubelet[3398]: E0912 17:23:26.738715 3398 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vj2b6" podUID="91cf3b25-a4f8-46ef-a218-a6fd5f87b47a" Sep 12 17:23:26.756964 containerd[1875]: time="2025-09-12T17:23:26.756219229Z" level=info msg="Container 51dede82c043b3e503b3204447e9b2874209e6fe4188df460a078e3434f3a643: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:23:26.773255 containerd[1875]: time="2025-09-12T17:23:26.773208133Z" level=info msg="CreateContainer within sandbox \"e6d0eb5b49267305034c9fa532e0f81bdfd34c633cb9a7a2a689dcfe3bb068ca\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"51dede82c043b3e503b3204447e9b2874209e6fe4188df460a078e3434f3a643\"" Sep 12 17:23:26.773873 containerd[1875]: time="2025-09-12T17:23:26.773783644Z" level=info msg="StartContainer for \"51dede82c043b3e503b3204447e9b2874209e6fe4188df460a078e3434f3a643\"" Sep 12 17:23:26.775818 containerd[1875]: time="2025-09-12T17:23:26.775789203Z" level=info msg="connecting to shim 51dede82c043b3e503b3204447e9b2874209e6fe4188df460a078e3434f3a643" address="unix:///run/containerd/s/58cc87a624af22830244b1526b09dba84550fd158457e0819655413e394c59a1" protocol=ttrpc version=3 Sep 12 17:23:26.797599 systemd[1]: Started cri-containerd-51dede82c043b3e503b3204447e9b2874209e6fe4188df460a078e3434f3a643.scope - libcontainer container 51dede82c043b3e503b3204447e9b2874209e6fe4188df460a078e3434f3a643. 
Sep 12 17:23:26.807398 kubelet[3398]: I0912 17:23:26.807371 3398 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:23:26.829897 containerd[1875]: time="2025-09-12T17:23:26.829864901Z" level=info msg="StartContainer for \"51dede82c043b3e503b3204447e9b2874209e6fe4188df460a078e3434f3a643\" returns successfully" Sep 12 17:23:26.838848 systemd[1]: cri-containerd-51dede82c043b3e503b3204447e9b2874209e6fe4188df460a078e3434f3a643.scope: Deactivated successfully. Sep 12 17:23:26.841580 containerd[1875]: time="2025-09-12T17:23:26.840608926Z" level=info msg="received exit event container_id:\"51dede82c043b3e503b3204447e9b2874209e6fe4188df460a078e3434f3a643\" id:\"51dede82c043b3e503b3204447e9b2874209e6fe4188df460a078e3434f3a643\" pid:4020 exited_at:{seconds:1757697806 nanos:840249296}" Sep 12 17:23:26.841580 containerd[1875]: time="2025-09-12T17:23:26.840842152Z" level=info msg="TaskExit event in podsandbox handler container_id:\"51dede82c043b3e503b3204447e9b2874209e6fe4188df460a078e3434f3a643\" id:\"51dede82c043b3e503b3204447e9b2874209e6fe4188df460a078e3434f3a643\" pid:4020 exited_at:{seconds:1757697806 nanos:840249296}" Sep 12 17:23:26.859733 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-51dede82c043b3e503b3204447e9b2874209e6fe4188df460a078e3434f3a643-rootfs.mount: Deactivated successfully. 
Sep 12 17:23:28.738740 kubelet[3398]: E0912 17:23:28.738656 3398 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vj2b6" podUID="91cf3b25-a4f8-46ef-a218-a6fd5f87b47a" Sep 12 17:23:28.815744 containerd[1875]: time="2025-09-12T17:23:28.815603787Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 12 17:23:30.739116 kubelet[3398]: E0912 17:23:30.738900 3398 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vj2b6" podUID="91cf3b25-a4f8-46ef-a218-a6fd5f87b47a" Sep 12 17:23:32.385036 containerd[1875]: time="2025-09-12T17:23:32.384989720Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:23:32.388412 containerd[1875]: time="2025-09-12T17:23:32.388378281Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=65913477" Sep 12 17:23:32.391441 containerd[1875]: time="2025-09-12T17:23:32.391402454Z" level=info msg="ImageCreate event name:\"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:23:32.395942 containerd[1875]: time="2025-09-12T17:23:32.395610645Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:23:32.395942 containerd[1875]: time="2025-09-12T17:23:32.395835333Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" 
with image id \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"67282718\" in 3.580064108s" Sep 12 17:23:32.395942 containerd[1875]: time="2025-09-12T17:23:32.395859238Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\"" Sep 12 17:23:32.398900 containerd[1875]: time="2025-09-12T17:23:32.398859818Z" level=info msg="CreateContainer within sandbox \"e6d0eb5b49267305034c9fa532e0f81bdfd34c633cb9a7a2a689dcfe3bb068ca\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 12 17:23:32.421604 containerd[1875]: time="2025-09-12T17:23:32.421580121Z" level=info msg="Container 2fe0c7ebf17222a435f43905ef6a631eb364e1951acd385dc08856386750b32f: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:23:32.439021 containerd[1875]: time="2025-09-12T17:23:32.438989258Z" level=info msg="CreateContainer within sandbox \"e6d0eb5b49267305034c9fa532e0f81bdfd34c633cb9a7a2a689dcfe3bb068ca\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"2fe0c7ebf17222a435f43905ef6a631eb364e1951acd385dc08856386750b32f\"" Sep 12 17:23:32.439580 containerd[1875]: time="2025-09-12T17:23:32.439501101Z" level=info msg="StartContainer for \"2fe0c7ebf17222a435f43905ef6a631eb364e1951acd385dc08856386750b32f\"" Sep 12 17:23:32.440913 containerd[1875]: time="2025-09-12T17:23:32.440858749Z" level=info msg="connecting to shim 2fe0c7ebf17222a435f43905ef6a631eb364e1951acd385dc08856386750b32f" address="unix:///run/containerd/s/58cc87a624af22830244b1526b09dba84550fd158457e0819655413e394c59a1" protocol=ttrpc version=3 Sep 12 17:23:32.457593 systemd[1]: Started cri-containerd-2fe0c7ebf17222a435f43905ef6a631eb364e1951acd385dc08856386750b32f.scope - libcontainer container 
2fe0c7ebf17222a435f43905ef6a631eb364e1951acd385dc08856386750b32f. Sep 12 17:23:32.488298 containerd[1875]: time="2025-09-12T17:23:32.488265779Z" level=info msg="StartContainer for \"2fe0c7ebf17222a435f43905ef6a631eb364e1951acd385dc08856386750b32f\" returns successfully" Sep 12 17:23:32.739564 kubelet[3398]: E0912 17:23:32.739343 3398 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vj2b6" podUID="91cf3b25-a4f8-46ef-a218-a6fd5f87b47a" Sep 12 17:23:33.665367 systemd[1]: cri-containerd-2fe0c7ebf17222a435f43905ef6a631eb364e1951acd385dc08856386750b32f.scope: Deactivated successfully. Sep 12 17:23:33.668337 systemd[1]: cri-containerd-2fe0c7ebf17222a435f43905ef6a631eb364e1951acd385dc08856386750b32f.scope: Consumed 315ms CPU time, 184.1M memory peak, 165.8M written to disk. Sep 12 17:23:33.669082 containerd[1875]: time="2025-09-12T17:23:33.669041438Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2fe0c7ebf17222a435f43905ef6a631eb364e1951acd385dc08856386750b32f\" id:\"2fe0c7ebf17222a435f43905ef6a631eb364e1951acd385dc08856386750b32f\" pid:4076 exited_at:{seconds:1757697813 nanos:668228152}" Sep 12 17:23:33.669482 containerd[1875]: time="2025-09-12T17:23:33.669377210Z" level=info msg="received exit event container_id:\"2fe0c7ebf17222a435f43905ef6a631eb364e1951acd385dc08856386750b32f\" id:\"2fe0c7ebf17222a435f43905ef6a631eb364e1951acd385dc08856386750b32f\" pid:4076 exited_at:{seconds:1757697813 nanos:668228152}" Sep 12 17:23:33.686274 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2fe0c7ebf17222a435f43905ef6a631eb364e1951acd385dc08856386750b32f-rootfs.mount: Deactivated successfully. 
Sep 12 17:23:33.767442 kubelet[3398]: I0912 17:23:33.767412 3398 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 12 17:23:34.096842 kubelet[3398]: W0912 17:23:33.826226 3398 reflector.go:569] object-"calico-apiserver"/"calico-apiserver-certs": failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:ci-4426.1.0-a-9410d45923" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4426.1.0-a-9410d45923' and this object Sep 12 17:23:34.096842 kubelet[3398]: E0912 17:23:33.830617 3398 reflector.go:166] "Unhandled Error" err="object-\"calico-apiserver\"/\"calico-apiserver-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"calico-apiserver-certs\" is forbidden: User \"system:node:ci-4426.1.0-a-9410d45923\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ci-4426.1.0-a-9410d45923' and this object" logger="UnhandledError" Sep 12 17:23:34.096842 kubelet[3398]: I0912 17:23:33.913766 3398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/9d32a396-40df-4829-9715-f7b972ecd86c-goldmane-key-pair\") pod \"goldmane-54d579b49d-qzvb7\" (UID: \"9d32a396-40df-4829-9715-f7b972ecd86c\") " pod="calico-system/goldmane-54d579b49d-qzvb7" Sep 12 17:23:34.096842 kubelet[3398]: I0912 17:23:33.913805 3398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btzmf\" (UniqueName: \"kubernetes.io/projected/b6e1a7a1-e091-4016-a23d-41642e3383a2-kube-api-access-btzmf\") pod \"calico-kube-controllers-7dcbdd784d-pcpqj\" (UID: \"b6e1a7a1-e091-4016-a23d-41642e3383a2\") " pod="calico-system/calico-kube-controllers-7dcbdd784d-pcpqj" Sep 12 17:23:34.096842 kubelet[3398]: I0912 17:23:33.913818 3398 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvf8l\" (UniqueName: \"kubernetes.io/projected/4ba03ff9-6320-42fb-94ed-e6ab8f2858f4-kube-api-access-xvf8l\") pod \"coredns-668d6bf9bc-rvwsb\" (UID: \"4ba03ff9-6320-42fb-94ed-e6ab8f2858f4\") " pod="kube-system/coredns-668d6bf9bc-rvwsb" Sep 12 17:23:33.817027 systemd[1]: Created slice kubepods-burstable-pod4ba03ff9_6320_42fb_94ed_e6ab8f2858f4.slice - libcontainer container kubepods-burstable-pod4ba03ff9_6320_42fb_94ed_e6ab8f2858f4.slice. Sep 12 17:23:34.097150 kubelet[3398]: I0912 17:23:33.913828 3398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/73929bc7-4919-4b77-ba79-6b869c871156-calico-apiserver-certs\") pod \"calico-apiserver-7ff6f6f7b7-m8ldc\" (UID: \"73929bc7-4919-4b77-ba79-6b869c871156\") " pod="calico-apiserver/calico-apiserver-7ff6f6f7b7-m8ldc" Sep 12 17:23:34.097150 kubelet[3398]: I0912 17:23:33.913839 3398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62kzs\" (UniqueName: \"kubernetes.io/projected/73929bc7-4919-4b77-ba79-6b869c871156-kube-api-access-62kzs\") pod \"calico-apiserver-7ff6f6f7b7-m8ldc\" (UID: \"73929bc7-4919-4b77-ba79-6b869c871156\") " pod="calico-apiserver/calico-apiserver-7ff6f6f7b7-m8ldc" Sep 12 17:23:34.097150 kubelet[3398]: I0912 17:23:33.913850 3398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdft6\" (UniqueName: \"kubernetes.io/projected/9d32a396-40df-4829-9715-f7b972ecd86c-kube-api-access-mdft6\") pod \"goldmane-54d579b49d-qzvb7\" (UID: \"9d32a396-40df-4829-9715-f7b972ecd86c\") " pod="calico-system/goldmane-54d579b49d-qzvb7" Sep 12 17:23:34.097150 kubelet[3398]: I0912 17:23:33.913869 3398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-9lfd5\" (UniqueName: \"kubernetes.io/projected/993d4da9-2993-4f01-b49e-29cd40727590-kube-api-access-9lfd5\") pod \"calico-apiserver-7ff6f6f7b7-fffcq\" (UID: \"993d4da9-2993-4f01-b49e-29cd40727590\") " pod="calico-apiserver/calico-apiserver-7ff6f6f7b7-fffcq" Sep 12 17:23:34.097150 kubelet[3398]: I0912 17:23:33.913878 3398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7aa40ba9-fdce-4e43-989e-6ae77304a9da-config-volume\") pod \"coredns-668d6bf9bc-k8z9b\" (UID: \"7aa40ba9-fdce-4e43-989e-6ae77304a9da\") " pod="kube-system/coredns-668d6bf9bc-k8z9b" Sep 12 17:23:33.829522 systemd[1]: Created slice kubepods-besteffort-pod73929bc7_4919_4b77_ba79_6b869c871156.slice - libcontainer container kubepods-besteffort-pod73929bc7_4919_4b77_ba79_6b869c871156.slice. Sep 12 17:23:34.097265 kubelet[3398]: I0912 17:23:33.913890 3398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4ba03ff9-6320-42fb-94ed-e6ab8f2858f4-config-volume\") pod \"coredns-668d6bf9bc-rvwsb\" (UID: \"4ba03ff9-6320-42fb-94ed-e6ab8f2858f4\") " pod="kube-system/coredns-668d6bf9bc-rvwsb" Sep 12 17:23:34.097265 kubelet[3398]: I0912 17:23:33.913899 3398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/650631bb-8718-4683-8b4b-952296ecec5b-whisker-ca-bundle\") pod \"whisker-7b78b5b6c5-g2jmd\" (UID: \"650631bb-8718-4683-8b4b-952296ecec5b\") " pod="calico-system/whisker-7b78b5b6c5-g2jmd" Sep 12 17:23:34.097265 kubelet[3398]: I0912 17:23:33.913920 3398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhbg8\" (UniqueName: \"kubernetes.io/projected/7aa40ba9-fdce-4e43-989e-6ae77304a9da-kube-api-access-rhbg8\") pod 
\"coredns-668d6bf9bc-k8z9b\" (UID: \"7aa40ba9-fdce-4e43-989e-6ae77304a9da\") " pod="kube-system/coredns-668d6bf9bc-k8z9b" Sep 12 17:23:34.097265 kubelet[3398]: I0912 17:23:33.913932 3398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b6e1a7a1-e091-4016-a23d-41642e3383a2-tigera-ca-bundle\") pod \"calico-kube-controllers-7dcbdd784d-pcpqj\" (UID: \"b6e1a7a1-e091-4016-a23d-41642e3383a2\") " pod="calico-system/calico-kube-controllers-7dcbdd784d-pcpqj" Sep 12 17:23:34.097265 kubelet[3398]: I0912 17:23:33.913944 3398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/650631bb-8718-4683-8b4b-952296ecec5b-whisker-backend-key-pair\") pod \"whisker-7b78b5b6c5-g2jmd\" (UID: \"650631bb-8718-4683-8b4b-952296ecec5b\") " pod="calico-system/whisker-7b78b5b6c5-g2jmd" Sep 12 17:23:33.840508 systemd[1]: Created slice kubepods-burstable-pod7aa40ba9_fdce_4e43_989e_6ae77304a9da.slice - libcontainer container kubepods-burstable-pod7aa40ba9_fdce_4e43_989e_6ae77304a9da.slice. 
Sep 12 17:23:34.097891 kubelet[3398]: I0912 17:23:33.913954 3398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d32a396-40df-4829-9715-f7b972ecd86c-config\") pod \"goldmane-54d579b49d-qzvb7\" (UID: \"9d32a396-40df-4829-9715-f7b972ecd86c\") " pod="calico-system/goldmane-54d579b49d-qzvb7" Sep 12 17:23:34.097891 kubelet[3398]: I0912 17:23:33.913975 3398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/993d4da9-2993-4f01-b49e-29cd40727590-calico-apiserver-certs\") pod \"calico-apiserver-7ff6f6f7b7-fffcq\" (UID: \"993d4da9-2993-4f01-b49e-29cd40727590\") " pod="calico-apiserver/calico-apiserver-7ff6f6f7b7-fffcq" Sep 12 17:23:34.097891 kubelet[3398]: I0912 17:23:33.913984 3398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d32a396-40df-4829-9715-f7b972ecd86c-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-qzvb7\" (UID: \"9d32a396-40df-4829-9715-f7b972ecd86c\") " pod="calico-system/goldmane-54d579b49d-qzvb7" Sep 12 17:23:34.097891 kubelet[3398]: I0912 17:23:33.913993 3398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fkkn\" (UniqueName: \"kubernetes.io/projected/650631bb-8718-4683-8b4b-952296ecec5b-kube-api-access-2fkkn\") pod \"whisker-7b78b5b6c5-g2jmd\" (UID: \"650631bb-8718-4683-8b4b-952296ecec5b\") " pod="calico-system/whisker-7b78b5b6c5-g2jmd" Sep 12 17:23:33.853623 systemd[1]: Created slice kubepods-besteffort-pod9d32a396_40df_4829_9715_f7b972ecd86c.slice - libcontainer container kubepods-besteffort-pod9d32a396_40df_4829_9715_f7b972ecd86c.slice. 
Sep 12 17:23:33.858473 systemd[1]: Created slice kubepods-besteffort-pod993d4da9_2993_4f01_b49e_29cd40727590.slice - libcontainer container kubepods-besteffort-pod993d4da9_2993_4f01_b49e_29cd40727590.slice. Sep 12 17:23:33.862173 systemd[1]: Created slice kubepods-besteffort-pod650631bb_8718_4683_8b4b_952296ecec5b.slice - libcontainer container kubepods-besteffort-pod650631bb_8718_4683_8b4b_952296ecec5b.slice. Sep 12 17:23:33.866976 systemd[1]: Created slice kubepods-besteffort-podb6e1a7a1_e091_4016_a23d_41642e3383a2.slice - libcontainer container kubepods-besteffort-podb6e1a7a1_e091_4016_a23d_41642e3383a2.slice. Sep 12 17:23:34.399055 containerd[1875]: time="2025-09-12T17:23:34.398369418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rvwsb,Uid:4ba03ff9-6320-42fb-94ed-e6ab8f2858f4,Namespace:kube-system,Attempt:0,}" Sep 12 17:23:34.406046 containerd[1875]: time="2025-09-12T17:23:34.406016357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b78b5b6c5-g2jmd,Uid:650631bb-8718-4683-8b4b-952296ecec5b,Namespace:calico-system,Attempt:0,}" Sep 12 17:23:34.491495 containerd[1875]: time="2025-09-12T17:23:34.491280922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k8z9b,Uid:7aa40ba9-fdce-4e43-989e-6ae77304a9da,Namespace:kube-system,Attempt:0,}" Sep 12 17:23:34.491629 containerd[1875]: time="2025-09-12T17:23:34.491508090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-qzvb7,Uid:9d32a396-40df-4829-9715-f7b972ecd86c,Namespace:calico-system,Attempt:0,}" Sep 12 17:23:34.491629 containerd[1875]: time="2025-09-12T17:23:34.491568596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7dcbdd784d-pcpqj,Uid:b6e1a7a1-e091-4016-a23d-41642e3383a2,Namespace:calico-system,Attempt:0,}" Sep 12 17:23:34.638071 containerd[1875]: time="2025-09-12T17:23:34.637974356Z" level=error msg="Failed to destroy network for sandbox 
\"a22e74a12c96045df98171cbc852454bb7802e4a8530fe80c8244241d5f9ba3b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:23:34.642563 containerd[1875]: time="2025-09-12T17:23:34.642492742Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rvwsb,Uid:4ba03ff9-6320-42fb-94ed-e6ab8f2858f4,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a22e74a12c96045df98171cbc852454bb7802e4a8530fe80c8244241d5f9ba3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:23:34.644144 kubelet[3398]: E0912 17:23:34.642707 3398 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a22e74a12c96045df98171cbc852454bb7802e4a8530fe80c8244241d5f9ba3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:23:34.644144 kubelet[3398]: E0912 17:23:34.642770 3398 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a22e74a12c96045df98171cbc852454bb7802e4a8530fe80c8244241d5f9ba3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-rvwsb" Sep 12 17:23:34.644144 kubelet[3398]: E0912 17:23:34.642786 3398 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"a22e74a12c96045df98171cbc852454bb7802e4a8530fe80c8244241d5f9ba3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-rvwsb" Sep 12 17:23:34.644235 kubelet[3398]: E0912 17:23:34.642826 3398 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-rvwsb_kube-system(4ba03ff9-6320-42fb-94ed-e6ab8f2858f4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-rvwsb_kube-system(4ba03ff9-6320-42fb-94ed-e6ab8f2858f4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a22e74a12c96045df98171cbc852454bb7802e4a8530fe80c8244241d5f9ba3b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-rvwsb" podUID="4ba03ff9-6320-42fb-94ed-e6ab8f2858f4" Sep 12 17:23:34.653770 containerd[1875]: time="2025-09-12T17:23:34.653614213Z" level=error msg="Failed to destroy network for sandbox \"afbf14107d2b79ce14457a31a771d05c5843bb080c83f50b7e222afd649a148b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:23:34.657245 containerd[1875]: time="2025-09-12T17:23:34.657123787Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b78b5b6c5-g2jmd,Uid:650631bb-8718-4683-8b4b-952296ecec5b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"afbf14107d2b79ce14457a31a771d05c5843bb080c83f50b7e222afd649a148b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:23:34.657451 kubelet[3398]: E0912 17:23:34.657423 3398 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"afbf14107d2b79ce14457a31a771d05c5843bb080c83f50b7e222afd649a148b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:23:34.657708 kubelet[3398]: E0912 17:23:34.657552 3398 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"afbf14107d2b79ce14457a31a771d05c5843bb080c83f50b7e222afd649a148b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7b78b5b6c5-g2jmd" Sep 12 17:23:34.657708 kubelet[3398]: E0912 17:23:34.657574 3398 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"afbf14107d2b79ce14457a31a771d05c5843bb080c83f50b7e222afd649a148b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7b78b5b6c5-g2jmd" Sep 12 17:23:34.657708 kubelet[3398]: E0912 17:23:34.657609 3398 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7b78b5b6c5-g2jmd_calico-system(650631bb-8718-4683-8b4b-952296ecec5b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7b78b5b6c5-g2jmd_calico-system(650631bb-8718-4683-8b4b-952296ecec5b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"afbf14107d2b79ce14457a31a771d05c5843bb080c83f50b7e222afd649a148b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7b78b5b6c5-g2jmd" podUID="650631bb-8718-4683-8b4b-952296ecec5b" Sep 12 17:23:34.660293 containerd[1875]: time="2025-09-12T17:23:34.660269324Z" level=error msg="Failed to destroy network for sandbox \"4f05b579a1e35ecd42b61512bba90b50db0599f87ce7f4fb0b0ada0d9202c3da\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:23:34.663872 containerd[1875]: time="2025-09-12T17:23:34.663843237Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k8z9b,Uid:7aa40ba9-fdce-4e43-989e-6ae77304a9da,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f05b579a1e35ecd42b61512bba90b50db0599f87ce7f4fb0b0ada0d9202c3da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:23:34.664103 kubelet[3398]: E0912 17:23:34.664081 3398 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f05b579a1e35ecd42b61512bba90b50db0599f87ce7f4fb0b0ada0d9202c3da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:23:34.664215 kubelet[3398]: E0912 17:23:34.664199 3398 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"4f05b579a1e35ecd42b61512bba90b50db0599f87ce7f4fb0b0ada0d9202c3da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-k8z9b" Sep 12 17:23:34.664282 kubelet[3398]: E0912 17:23:34.664265 3398 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f05b579a1e35ecd42b61512bba90b50db0599f87ce7f4fb0b0ada0d9202c3da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-k8z9b" Sep 12 17:23:34.664363 kubelet[3398]: E0912 17:23:34.664346 3398 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-k8z9b_kube-system(7aa40ba9-fdce-4e43-989e-6ae77304a9da)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-k8z9b_kube-system(7aa40ba9-fdce-4e43-989e-6ae77304a9da)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4f05b579a1e35ecd42b61512bba90b50db0599f87ce7f4fb0b0ada0d9202c3da\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-k8z9b" podUID="7aa40ba9-fdce-4e43-989e-6ae77304a9da" Sep 12 17:23:34.673555 containerd[1875]: time="2025-09-12T17:23:34.673486167Z" level=error msg="Failed to destroy network for sandbox \"2272979e410a37521d7a8bf714f5941f30e39969a82420e01bac4cd487ed9030\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:23:34.677567 
containerd[1875]: time="2025-09-12T17:23:34.677456685Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-qzvb7,Uid:9d32a396-40df-4829-9715-f7b972ecd86c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2272979e410a37521d7a8bf714f5941f30e39969a82420e01bac4cd487ed9030\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:23:34.677911 kubelet[3398]: E0912 17:23:34.677811 3398 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2272979e410a37521d7a8bf714f5941f30e39969a82420e01bac4cd487ed9030\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:23:34.677911 kubelet[3398]: E0912 17:23:34.677850 3398 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2272979e410a37521d7a8bf714f5941f30e39969a82420e01bac4cd487ed9030\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-qzvb7" Sep 12 17:23:34.677911 kubelet[3398]: E0912 17:23:34.677866 3398 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2272979e410a37521d7a8bf714f5941f30e39969a82420e01bac4cd487ed9030\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-qzvb7" Sep 12 
17:23:34.678219 containerd[1875]: time="2025-09-12T17:23:34.678083756Z" level=error msg="Failed to destroy network for sandbox \"827c7e95e9f59bc20da74b23065b415ee8c02b6b31fc5578010bd7e271fa51f7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:23:34.678275 kubelet[3398]: E0912 17:23:34.678164 3398 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-qzvb7_calico-system(9d32a396-40df-4829-9715-f7b972ecd86c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-qzvb7_calico-system(9d32a396-40df-4829-9715-f7b972ecd86c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2272979e410a37521d7a8bf714f5941f30e39969a82420e01bac4cd487ed9030\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-qzvb7" podUID="9d32a396-40df-4829-9715-f7b972ecd86c"
Sep 12 17:23:34.681969 containerd[1875]: time="2025-09-12T17:23:34.681888636Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7dcbdd784d-pcpqj,Uid:b6e1a7a1-e091-4016-a23d-41642e3383a2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"827c7e95e9f59bc20da74b23065b415ee8c02b6b31fc5578010bd7e271fa51f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:23:34.682056 kubelet[3398]: E0912 17:23:34.682032 3398 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"827c7e95e9f59bc20da74b23065b415ee8c02b6b31fc5578010bd7e271fa51f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:23:34.682083 kubelet[3398]: E0912 17:23:34.682071 3398 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"827c7e95e9f59bc20da74b23065b415ee8c02b6b31fc5578010bd7e271fa51f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7dcbdd784d-pcpqj"
Sep 12 17:23:34.682105 kubelet[3398]: E0912 17:23:34.682088 3398 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"827c7e95e9f59bc20da74b23065b415ee8c02b6b31fc5578010bd7e271fa51f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7dcbdd784d-pcpqj"
Sep 12 17:23:34.682157 kubelet[3398]: E0912 17:23:34.682110 3398 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7dcbdd784d-pcpqj_calico-system(b6e1a7a1-e091-4016-a23d-41642e3383a2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7dcbdd784d-pcpqj_calico-system(b6e1a7a1-e091-4016-a23d-41642e3383a2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"827c7e95e9f59bc20da74b23065b415ee8c02b6b31fc5578010bd7e271fa51f7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7dcbdd784d-pcpqj" podUID="b6e1a7a1-e091-4016-a23d-41642e3383a2"
Sep 12 17:23:34.742793 systemd[1]: Created slice kubepods-besteffort-pod91cf3b25_a4f8_46ef_a218_a6fd5f87b47a.slice - libcontainer container kubepods-besteffort-pod91cf3b25_a4f8_46ef_a218_a6fd5f87b47a.slice.
Sep 12 17:23:34.744849 containerd[1875]: time="2025-09-12T17:23:34.744815215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vj2b6,Uid:91cf3b25-a4f8-46ef-a218-a6fd5f87b47a,Namespace:calico-system,Attempt:0,}"
Sep 12 17:23:34.780044 containerd[1875]: time="2025-09-12T17:23:34.779897803Z" level=error msg="Failed to destroy network for sandbox \"92401e7a9c8d1b93c0d64ec99cdb21dc67c94757e494495560de3ffd2f42456b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:23:34.782013 systemd[1]: run-netns-cni\x2dbb36c2ea\x2da062\x2dc541\x2d863e\x2d27fd3f5df648.mount: Deactivated successfully.
Sep 12 17:23:34.784217 containerd[1875]: time="2025-09-12T17:23:34.784112442Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vj2b6,Uid:91cf3b25-a4f8-46ef-a218-a6fd5f87b47a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"92401e7a9c8d1b93c0d64ec99cdb21dc67c94757e494495560de3ffd2f42456b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:23:34.784382 kubelet[3398]: E0912 17:23:34.784340 3398 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92401e7a9c8d1b93c0d64ec99cdb21dc67c94757e494495560de3ffd2f42456b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:23:34.785089 kubelet[3398]: E0912 17:23:34.784397 3398 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92401e7a9c8d1b93c0d64ec99cdb21dc67c94757e494495560de3ffd2f42456b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vj2b6"
Sep 12 17:23:34.785089 kubelet[3398]: E0912 17:23:34.784417 3398 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92401e7a9c8d1b93c0d64ec99cdb21dc67c94757e494495560de3ffd2f42456b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vj2b6"
Sep 12 17:23:34.785089 kubelet[3398]: E0912 17:23:34.784453 3398 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-vj2b6_calico-system(91cf3b25-a4f8-46ef-a218-a6fd5f87b47a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-vj2b6_calico-system(91cf3b25-a4f8-46ef-a218-a6fd5f87b47a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"92401e7a9c8d1b93c0d64ec99cdb21dc67c94757e494495560de3ffd2f42456b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-vj2b6" podUID="91cf3b25-a4f8-46ef-a218-a6fd5f87b47a"
Sep 12 17:23:34.836286 containerd[1875]: time="2025-09-12T17:23:34.836252146Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\""
Sep 12 17:23:35.017232 kubelet[3398]: E0912 17:23:35.016921 3398 secret.go:189] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition
Sep 12 17:23:35.017232 kubelet[3398]: E0912 17:23:35.017020 3398 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/73929bc7-4919-4b77-ba79-6b869c871156-calico-apiserver-certs podName:73929bc7-4919-4b77-ba79-6b869c871156 nodeName:}" failed. No retries permitted until 2025-09-12 17:23:35.517000898 +0000 UTC m=+31.937918746 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/73929bc7-4919-4b77-ba79-6b869c871156-calico-apiserver-certs") pod "calico-apiserver-7ff6f6f7b7-m8ldc" (UID: "73929bc7-4919-4b77-ba79-6b869c871156") : failed to sync secret cache: timed out waiting for the condition
Sep 12 17:23:35.018010 kubelet[3398]: E0912 17:23:35.017514 3398 secret.go:189] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition
Sep 12 17:23:35.018010 kubelet[3398]: E0912 17:23:35.017571 3398 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/993d4da9-2993-4f01-b49e-29cd40727590-calico-apiserver-certs podName:993d4da9-2993-4f01-b49e-29cd40727590 nodeName:}" failed. No retries permitted until 2025-09-12 17:23:35.517543502 +0000 UTC m=+31.938461342 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/993d4da9-2993-4f01-b49e-29cd40727590-calico-apiserver-certs") pod "calico-apiserver-7ff6f6f7b7-fffcq" (UID: "993d4da9-2993-4f01-b49e-29cd40727590") : failed to sync secret cache: timed out waiting for the condition
Sep 12 17:23:35.599874 containerd[1875]: time="2025-09-12T17:23:35.599763674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7ff6f6f7b7-fffcq,Uid:993d4da9-2993-4f01-b49e-29cd40727590,Namespace:calico-apiserver,Attempt:0,}"
Sep 12 17:23:35.645188 containerd[1875]: time="2025-09-12T17:23:35.645142583Z" level=error msg="Failed to destroy network for sandbox \"be3e57b323270cc3332728923988657ada76752274b391683cc8a24f41884a2d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:23:35.648738 containerd[1875]: time="2025-09-12T17:23:35.648703662Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7ff6f6f7b7-fffcq,Uid:993d4da9-2993-4f01-b49e-29cd40727590,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"be3e57b323270cc3332728923988657ada76752274b391683cc8a24f41884a2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:23:35.648969 kubelet[3398]: E0912 17:23:35.648920 3398 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be3e57b323270cc3332728923988657ada76752274b391683cc8a24f41884a2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:23:35.649044 kubelet[3398]: E0912 17:23:35.648984 3398 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be3e57b323270cc3332728923988657ada76752274b391683cc8a24f41884a2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7ff6f6f7b7-fffcq"
Sep 12 17:23:35.649044 kubelet[3398]: E0912 17:23:35.649005 3398 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be3e57b323270cc3332728923988657ada76752274b391683cc8a24f41884a2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7ff6f6f7b7-fffcq"
Sep 12 17:23:35.649106 kubelet[3398]: E0912 17:23:35.649043 3398 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7ff6f6f7b7-fffcq_calico-apiserver(993d4da9-2993-4f01-b49e-29cd40727590)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7ff6f6f7b7-fffcq_calico-apiserver(993d4da9-2993-4f01-b49e-29cd40727590)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"be3e57b323270cc3332728923988657ada76752274b391683cc8a24f41884a2d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7ff6f6f7b7-fffcq" podUID="993d4da9-2993-4f01-b49e-29cd40727590"
Sep 12 17:23:35.686660 systemd[1]: run-netns-cni\x2da0e085d7\x2dafec\x2dd71b\x2d90bf\x2db78452e0f46d.mount: Deactivated successfully.
Sep 12 17:23:35.699027 containerd[1875]: time="2025-09-12T17:23:35.698971835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7ff6f6f7b7-m8ldc,Uid:73929bc7-4919-4b77-ba79-6b869c871156,Namespace:calico-apiserver,Attempt:0,}"
Sep 12 17:23:35.756594 containerd[1875]: time="2025-09-12T17:23:35.756011034Z" level=error msg="Failed to destroy network for sandbox \"41dcb497d48633050b26ad7b8d84e5c7e1ac817b3aa06e3c14445694adc1addc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:23:35.757562 systemd[1]: run-netns-cni\x2d3e3993ae\x2daaf8\x2d912b\x2d3cc6\x2d48f8b495419a.mount: Deactivated successfully.
Sep 12 17:23:35.761200 containerd[1875]: time="2025-09-12T17:23:35.761167932Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7ff6f6f7b7-m8ldc,Uid:73929bc7-4919-4b77-ba79-6b869c871156,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"41dcb497d48633050b26ad7b8d84e5c7e1ac817b3aa06e3c14445694adc1addc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:23:35.761588 kubelet[3398]: E0912 17:23:35.761494 3398 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41dcb497d48633050b26ad7b8d84e5c7e1ac817b3aa06e3c14445694adc1addc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:23:35.761588 kubelet[3398]: E0912 17:23:35.761541 3398 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41dcb497d48633050b26ad7b8d84e5c7e1ac817b3aa06e3c14445694adc1addc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7ff6f6f7b7-m8ldc"
Sep 12 17:23:35.761588 kubelet[3398]: E0912 17:23:35.761561 3398 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41dcb497d48633050b26ad7b8d84e5c7e1ac817b3aa06e3c14445694adc1addc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7ff6f6f7b7-m8ldc"
Sep 12 17:23:35.761745 kubelet[3398]: E0912 17:23:35.761724 3398 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7ff6f6f7b7-m8ldc_calico-apiserver(73929bc7-4919-4b77-ba79-6b869c871156)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7ff6f6f7b7-m8ldc_calico-apiserver(73929bc7-4919-4b77-ba79-6b869c871156)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"41dcb497d48633050b26ad7b8d84e5c7e1ac817b3aa06e3c14445694adc1addc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7ff6f6f7b7-m8ldc" podUID="73929bc7-4919-4b77-ba79-6b869c871156"
Sep 12 17:23:40.716698 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount610266402.mount: Deactivated successfully.
Sep 12 17:23:41.310369 containerd[1875]: time="2025-09-12T17:23:41.310320374Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:23:41.312941 containerd[1875]: time="2025-09-12T17:23:41.312821154Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=151100457"
Sep 12 17:23:41.315579 containerd[1875]: time="2025-09-12T17:23:41.315556478Z" level=info msg="ImageCreate event name:\"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:23:41.320430 containerd[1875]: time="2025-09-12T17:23:41.319887836Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:23:41.320430 containerd[1875]: time="2025-09-12T17:23:41.320198976Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"151100319\" in 6.483914076s"
Sep 12 17:23:41.320430 containerd[1875]: time="2025-09-12T17:23:41.320217448Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\""
Sep 12 17:23:41.333956 containerd[1875]: time="2025-09-12T17:23:41.333933294Z" level=info msg="CreateContainer within sandbox \"e6d0eb5b49267305034c9fa532e0f81bdfd34c633cb9a7a2a689dcfe3bb068ca\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Sep 12 17:23:41.357374 containerd[1875]: time="2025-09-12T17:23:41.357350767Z" level=info msg="Container f02e13a6b5c334c95bd544250fb66979f96d32bff376192c35e9982ae47c57f8: CDI devices from CRI Config.CDIDevices: []"
Sep 12 17:23:41.376707 containerd[1875]: time="2025-09-12T17:23:41.376681763Z" level=info msg="CreateContainer within sandbox \"e6d0eb5b49267305034c9fa532e0f81bdfd34c633cb9a7a2a689dcfe3bb068ca\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"f02e13a6b5c334c95bd544250fb66979f96d32bff376192c35e9982ae47c57f8\""
Sep 12 17:23:41.377782 containerd[1875]: time="2025-09-12T17:23:41.377765474Z" level=info msg="StartContainer for \"f02e13a6b5c334c95bd544250fb66979f96d32bff376192c35e9982ae47c57f8\""
Sep 12 17:23:41.379292 containerd[1875]: time="2025-09-12T17:23:41.379248417Z" level=info msg="connecting to shim f02e13a6b5c334c95bd544250fb66979f96d32bff376192c35e9982ae47c57f8" address="unix:///run/containerd/s/58cc87a624af22830244b1526b09dba84550fd158457e0819655413e394c59a1" protocol=ttrpc version=3
Sep 12 17:23:41.397591 systemd[1]: Started cri-containerd-f02e13a6b5c334c95bd544250fb66979f96d32bff376192c35e9982ae47c57f8.scope - libcontainer container f02e13a6b5c334c95bd544250fb66979f96d32bff376192c35e9982ae47c57f8.
Sep 12 17:23:41.429734 containerd[1875]: time="2025-09-12T17:23:41.429530433Z" level=info msg="StartContainer for \"f02e13a6b5c334c95bd544250fb66979f96d32bff376192c35e9982ae47c57f8\" returns successfully"
Sep 12 17:23:41.781388 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Sep 12 17:23:41.781518 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Sep 12 17:23:41.881621 kubelet[3398]: I0912 17:23:41.881562 3398 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-5sf6x" podStartSLOduration=1.722852279 podStartE2EDuration="19.881548511s" podCreationTimestamp="2025-09-12 17:23:22 +0000 UTC" firstStartedPulling="2025-09-12 17:23:23.162372951 +0000 UTC m=+19.583290799" lastFinishedPulling="2025-09-12 17:23:41.321069191 +0000 UTC m=+37.741987031" observedRunningTime="2025-09-12 17:23:41.880958578 +0000 UTC m=+38.301876418" watchObservedRunningTime="2025-09-12 17:23:41.881548511 +0000 UTC m=+38.302466359"
Sep 12 17:23:41.962946 kubelet[3398]: I0912 17:23:41.962820 3398 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2fkkn\" (UniqueName: \"kubernetes.io/projected/650631bb-8718-4683-8b4b-952296ecec5b-kube-api-access-2fkkn\") pod \"650631bb-8718-4683-8b4b-952296ecec5b\" (UID: \"650631bb-8718-4683-8b4b-952296ecec5b\") "
Sep 12 17:23:41.962946 kubelet[3398]: I0912 17:23:41.962861 3398 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/650631bb-8718-4683-8b4b-952296ecec5b-whisker-ca-bundle\") pod \"650631bb-8718-4683-8b4b-952296ecec5b\" (UID: \"650631bb-8718-4683-8b4b-952296ecec5b\") "
Sep 12 17:23:41.962946 kubelet[3398]: I0912 17:23:41.962874 3398 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/650631bb-8718-4683-8b4b-952296ecec5b-whisker-backend-key-pair\") pod \"650631bb-8718-4683-8b4b-952296ecec5b\" (UID: \"650631bb-8718-4683-8b4b-952296ecec5b\") "
Sep 12 17:23:41.968789 kubelet[3398]: I0912 17:23:41.968593 3398 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/650631bb-8718-4683-8b4b-952296ecec5b-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "650631bb-8718-4683-8b4b-952296ecec5b" (UID: "650631bb-8718-4683-8b4b-952296ecec5b"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Sep 12 17:23:41.970769 systemd[1]: var-lib-kubelet-pods-650631bb\x2d8718\x2d4683\x2d8b4b\x2d952296ecec5b-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully.
Sep 12 17:23:41.972027 kubelet[3398]: I0912 17:23:41.971996 3398 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/650631bb-8718-4683-8b4b-952296ecec5b-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "650631bb-8718-4683-8b4b-952296ecec5b" (UID: "650631bb-8718-4683-8b4b-952296ecec5b"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Sep 12 17:23:41.973272 systemd[1]: var-lib-kubelet-pods-650631bb\x2d8718\x2d4683\x2d8b4b\x2d952296ecec5b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2fkkn.mount: Deactivated successfully.
Sep 12 17:23:41.975712 kubelet[3398]: I0912 17:23:41.975674 3398 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/650631bb-8718-4683-8b4b-952296ecec5b-kube-api-access-2fkkn" (OuterVolumeSpecName: "kube-api-access-2fkkn") pod "650631bb-8718-4683-8b4b-952296ecec5b" (UID: "650631bb-8718-4683-8b4b-952296ecec5b"). InnerVolumeSpecName "kube-api-access-2fkkn". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 12 17:23:42.007822 containerd[1875]: time="2025-09-12T17:23:42.007788291Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f02e13a6b5c334c95bd544250fb66979f96d32bff376192c35e9982ae47c57f8\" id:\"cf9c6a0ec7ee08d4042b929120bc08c5bfb78621cb4e32fe9a9c195ebdd4a810\" pid:4401 exit_status:1 exited_at:{seconds:1757697822 nanos:7113643}"
Sep 12 17:23:42.063861 kubelet[3398]: I0912 17:23:42.063821 3398 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2fkkn\" (UniqueName: \"kubernetes.io/projected/650631bb-8718-4683-8b4b-952296ecec5b-kube-api-access-2fkkn\") on node \"ci-4426.1.0-a-9410d45923\" DevicePath \"\""
Sep 12 17:23:42.063861 kubelet[3398]: I0912 17:23:42.063853 3398 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/650631bb-8718-4683-8b4b-952296ecec5b-whisker-ca-bundle\") on node \"ci-4426.1.0-a-9410d45923\" DevicePath \"\""
Sep 12 17:23:42.063861 kubelet[3398]: I0912 17:23:42.063861 3398 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/650631bb-8718-4683-8b4b-952296ecec5b-whisker-backend-key-pair\") on node \"ci-4426.1.0-a-9410d45923\" DevicePath \"\""
Sep 12 17:23:42.861392 systemd[1]: Removed slice kubepods-besteffort-pod650631bb_8718_4683_8b4b_952296ecec5b.slice - libcontainer container kubepods-besteffort-pod650631bb_8718_4683_8b4b_952296ecec5b.slice.
Sep 12 17:23:42.947165 systemd[1]: Created slice kubepods-besteffort-pod01b017e2_f391_4097_ae71_f6831f6abc6e.slice - libcontainer container kubepods-besteffort-pod01b017e2_f391_4097_ae71_f6831f6abc6e.slice.
Sep 12 17:23:42.961558 containerd[1875]: time="2025-09-12T17:23:42.961518780Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f02e13a6b5c334c95bd544250fb66979f96d32bff376192c35e9982ae47c57f8\" id:\"29f8adeb754af78257549d33e13851bc96a4c3f331851d95e7295918a40e0d94\" pid:4444 exit_status:1 exited_at:{seconds:1757697822 nanos:961090660}"
Sep 12 17:23:42.969847 kubelet[3398]: I0912 17:23:42.969766 3398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01b017e2-f391-4097-ae71-f6831f6abc6e-whisker-ca-bundle\") pod \"whisker-574b9c8d6-x25x9\" (UID: \"01b017e2-f391-4097-ae71-f6831f6abc6e\") " pod="calico-system/whisker-574b9c8d6-x25x9"
Sep 12 17:23:42.969847 kubelet[3398]: I0912 17:23:42.969810 3398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/01b017e2-f391-4097-ae71-f6831f6abc6e-whisker-backend-key-pair\") pod \"whisker-574b9c8d6-x25x9\" (UID: \"01b017e2-f391-4097-ae71-f6831f6abc6e\") " pod="calico-system/whisker-574b9c8d6-x25x9"
Sep 12 17:23:42.969847 kubelet[3398]: I0912 17:23:42.969821 3398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rt4jp\" (UniqueName: \"kubernetes.io/projected/01b017e2-f391-4097-ae71-f6831f6abc6e-kube-api-access-rt4jp\") pod \"whisker-574b9c8d6-x25x9\" (UID: \"01b017e2-f391-4097-ae71-f6831f6abc6e\") " pod="calico-system/whisker-574b9c8d6-x25x9"
Sep 12 17:23:43.250682 containerd[1875]: time="2025-09-12T17:23:43.250458218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-574b9c8d6-x25x9,Uid:01b017e2-f391-4097-ae71-f6831f6abc6e,Namespace:calico-system,Attempt:0,}"
Sep 12 17:23:43.526893 systemd-networkd[1679]: cali2267dfae6b1: Link UP
Sep 12 17:23:43.527512 systemd-networkd[1679]: cali2267dfae6b1: Gained carrier
Sep 12 17:23:43.546537 containerd[1875]: 2025-09-12 17:23:43.407 [INFO][4550] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Sep 12 17:23:43.546537 containerd[1875]: 2025-09-12 17:23:43.447 [INFO][4550] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4426.1.0--a--9410d45923-k8s-whisker--574b9c8d6--x25x9-eth0 whisker-574b9c8d6- calico-system 01b017e2-f391-4097-ae71-f6831f6abc6e 897 0 2025-09-12 17:23:42 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:574b9c8d6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4426.1.0-a-9410d45923 whisker-574b9c8d6-x25x9 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali2267dfae6b1 [] [] }} ContainerID="b185e6133c99bac0be693f2f34ab46455874d19f5688dfa5cc25be649bc1d216" Namespace="calico-system" Pod="whisker-574b9c8d6-x25x9" WorkloadEndpoint="ci--4426.1.0--a--9410d45923-k8s-whisker--574b9c8d6--x25x9-"
Sep 12 17:23:43.546537 containerd[1875]: 2025-09-12 17:23:43.447 [INFO][4550] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b185e6133c99bac0be693f2f34ab46455874d19f5688dfa5cc25be649bc1d216" Namespace="calico-system" Pod="whisker-574b9c8d6-x25x9" WorkloadEndpoint="ci--4426.1.0--a--9410d45923-k8s-whisker--574b9c8d6--x25x9-eth0"
Sep 12 17:23:43.546537 containerd[1875]: 2025-09-12 17:23:43.465 [INFO][4563] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b185e6133c99bac0be693f2f34ab46455874d19f5688dfa5cc25be649bc1d216" HandleID="k8s-pod-network.b185e6133c99bac0be693f2f34ab46455874d19f5688dfa5cc25be649bc1d216" Workload="ci--4426.1.0--a--9410d45923-k8s-whisker--574b9c8d6--x25x9-eth0"
Sep 12 17:23:43.546974 containerd[1875]: 2025-09-12 17:23:43.465 [INFO][4563] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b185e6133c99bac0be693f2f34ab46455874d19f5688dfa5cc25be649bc1d216" HandleID="k8s-pod-network.b185e6133c99bac0be693f2f34ab46455874d19f5688dfa5cc25be649bc1d216" Workload="ci--4426.1.0--a--9410d45923-k8s-whisker--574b9c8d6--x25x9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024afa0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4426.1.0-a-9410d45923", "pod":"whisker-574b9c8d6-x25x9", "timestamp":"2025-09-12 17:23:43.465838684 +0000 UTC"}, Hostname:"ci-4426.1.0-a-9410d45923", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 12 17:23:43.546974 containerd[1875]: 2025-09-12 17:23:43.465 [INFO][4563] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:23:43.546974 containerd[1875]: 2025-09-12 17:23:43.466 [INFO][4563] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:23:43.546974 containerd[1875]: 2025-09-12 17:23:43.466 [INFO][4563] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4426.1.0-a-9410d45923'
Sep 12 17:23:43.546974 containerd[1875]: 2025-09-12 17:23:43.471 [INFO][4563] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b185e6133c99bac0be693f2f34ab46455874d19f5688dfa5cc25be649bc1d216" host="ci-4426.1.0-a-9410d45923"
Sep 12 17:23:43.546974 containerd[1875]: 2025-09-12 17:23:43.474 [INFO][4563] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4426.1.0-a-9410d45923"
Sep 12 17:23:43.546974 containerd[1875]: 2025-09-12 17:23:43.478 [INFO][4563] ipam/ipam.go 511: Trying affinity for 192.168.108.64/26 host="ci-4426.1.0-a-9410d45923"
Sep 12 17:23:43.546974 containerd[1875]: 2025-09-12 17:23:43.479 [INFO][4563] ipam/ipam.go 158: Attempting to load block cidr=192.168.108.64/26 host="ci-4426.1.0-a-9410d45923"
Sep 12 17:23:43.546974 containerd[1875]: 2025-09-12 17:23:43.481 [INFO][4563] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.108.64/26 host="ci-4426.1.0-a-9410d45923"
Sep 12 17:23:43.547352 containerd[1875]: 2025-09-12 17:23:43.481 [INFO][4563] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.108.64/26 handle="k8s-pod-network.b185e6133c99bac0be693f2f34ab46455874d19f5688dfa5cc25be649bc1d216" host="ci-4426.1.0-a-9410d45923"
Sep 12 17:23:43.547352 containerd[1875]: 2025-09-12 17:23:43.482 [INFO][4563] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b185e6133c99bac0be693f2f34ab46455874d19f5688dfa5cc25be649bc1d216
Sep 12 17:23:43.547352 containerd[1875]: 2025-09-12 17:23:43.486 [INFO][4563] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.108.64/26 handle="k8s-pod-network.b185e6133c99bac0be693f2f34ab46455874d19f5688dfa5cc25be649bc1d216" host="ci-4426.1.0-a-9410d45923"
Sep 12 17:23:43.547352 containerd[1875]: 2025-09-12 17:23:43.499 [INFO][4563] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.108.65/26] block=192.168.108.64/26 handle="k8s-pod-network.b185e6133c99bac0be693f2f34ab46455874d19f5688dfa5cc25be649bc1d216" host="ci-4426.1.0-a-9410d45923"
Sep 12 17:23:43.547352 containerd[1875]: 2025-09-12 17:23:43.499 [INFO][4563] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.108.65/26] handle="k8s-pod-network.b185e6133c99bac0be693f2f34ab46455874d19f5688dfa5cc25be649bc1d216" host="ci-4426.1.0-a-9410d45923"
Sep 12 17:23:43.547352 containerd[1875]: 2025-09-12 17:23:43.499 [INFO][4563] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:23:43.547352 containerd[1875]: 2025-09-12 17:23:43.499 [INFO][4563] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.108.65/26] IPv6=[] ContainerID="b185e6133c99bac0be693f2f34ab46455874d19f5688dfa5cc25be649bc1d216" HandleID="k8s-pod-network.b185e6133c99bac0be693f2f34ab46455874d19f5688dfa5cc25be649bc1d216" Workload="ci--4426.1.0--a--9410d45923-k8s-whisker--574b9c8d6--x25x9-eth0"
Sep 12 17:23:43.547448 containerd[1875]: 2025-09-12 17:23:43.501 [INFO][4550] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b185e6133c99bac0be693f2f34ab46455874d19f5688dfa5cc25be649bc1d216" Namespace="calico-system" Pod="whisker-574b9c8d6-x25x9" WorkloadEndpoint="ci--4426.1.0--a--9410d45923-k8s-whisker--574b9c8d6--x25x9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4426.1.0--a--9410d45923-k8s-whisker--574b9c8d6--x25x9-eth0", GenerateName:"whisker-574b9c8d6-", Namespace:"calico-system", SelfLink:"", UID:"01b017e2-f391-4097-ae71-f6831f6abc6e", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 23, 42, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"574b9c8d6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4426.1.0-a-9410d45923", ContainerID:"", Pod:"whisker-574b9c8d6-x25x9", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.108.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali2267dfae6b1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:23:43.547448 containerd[1875]: 2025-09-12 17:23:43.502 [INFO][4550] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.108.65/32] ContainerID="b185e6133c99bac0be693f2f34ab46455874d19f5688dfa5cc25be649bc1d216" Namespace="calico-system" Pod="whisker-574b9c8d6-x25x9" WorkloadEndpoint="ci--4426.1.0--a--9410d45923-k8s-whisker--574b9c8d6--x25x9-eth0"
Sep 12 17:23:43.548019 containerd[1875]: 2025-09-12 17:23:43.502 [INFO][4550] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2267dfae6b1 ContainerID="b185e6133c99bac0be693f2f34ab46455874d19f5688dfa5cc25be649bc1d216" Namespace="calico-system" Pod="whisker-574b9c8d6-x25x9" WorkloadEndpoint="ci--4426.1.0--a--9410d45923-k8s-whisker--574b9c8d6--x25x9-eth0"
Sep 12 17:23:43.548019 containerd[1875]: 2025-09-12 17:23:43.527 [INFO][4550] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b185e6133c99bac0be693f2f34ab46455874d19f5688dfa5cc25be649bc1d216" Namespace="calico-system" Pod="whisker-574b9c8d6-x25x9" WorkloadEndpoint="ci--4426.1.0--a--9410d45923-k8s-whisker--574b9c8d6--x25x9-eth0"
Sep 12 17:23:43.548059 containerd[1875]: 2025-09-12 17:23:43.527 [INFO][4550] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b185e6133c99bac0be693f2f34ab46455874d19f5688dfa5cc25be649bc1d216" Namespace="calico-system" Pod="whisker-574b9c8d6-x25x9" WorkloadEndpoint="ci--4426.1.0--a--9410d45923-k8s-whisker--574b9c8d6--x25x9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4426.1.0--a--9410d45923-k8s-whisker--574b9c8d6--x25x9-eth0", GenerateName:"whisker-574b9c8d6-", Namespace:"calico-system", SelfLink:"", UID:"01b017e2-f391-4097-ae71-f6831f6abc6e", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 23, 42, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"574b9c8d6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4426.1.0-a-9410d45923", ContainerID:"b185e6133c99bac0be693f2f34ab46455874d19f5688dfa5cc25be649bc1d216", Pod:"whisker-574b9c8d6-x25x9", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.108.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali2267dfae6b1", MAC:"66:45:7c:12:89:97", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:23:43.548098 containerd[1875]: 2025-09-12 17:23:43.544 [INFO][4550] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b185e6133c99bac0be693f2f34ab46455874d19f5688dfa5cc25be649bc1d216" Namespace="calico-system" Pod="whisker-574b9c8d6-x25x9" WorkloadEndpoint="ci--4426.1.0--a--9410d45923-k8s-whisker--574b9c8d6--x25x9-eth0"
Sep 12 17:23:43.587497 containerd[1875]: time="2025-09-12T17:23:43.587082497Z" level=info msg="connecting to shim b185e6133c99bac0be693f2f34ab46455874d19f5688dfa5cc25be649bc1d216" address="unix:///run/containerd/s/7950f366a9012b7d2794e493caf8fdd5dd01b8a84d7b594a84d9435f45ffbde8" namespace=k8s.io protocol=ttrpc version=3
Sep 12 17:23:43.607610 systemd[1]: Started cri-containerd-b185e6133c99bac0be693f2f34ab46455874d19f5688dfa5cc25be649bc1d216.scope - libcontainer container b185e6133c99bac0be693f2f34ab46455874d19f5688dfa5cc25be649bc1d216.
Sep 12 17:23:43.635806 containerd[1875]: time="2025-09-12T17:23:43.635764423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-574b9c8d6-x25x9,Uid:01b017e2-f391-4097-ae71-f6831f6abc6e,Namespace:calico-system,Attempt:0,} returns sandbox id \"b185e6133c99bac0be693f2f34ab46455874d19f5688dfa5cc25be649bc1d216\""
Sep 12 17:23:43.637603 containerd[1875]: time="2025-09-12T17:23:43.637581497Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\""
Sep 12 17:23:43.741549 kubelet[3398]: I0912 17:23:43.741517 3398 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="650631bb-8718-4683-8b4b-952296ecec5b" path="/var/lib/kubelet/pods/650631bb-8718-4683-8b4b-952296ecec5b/volumes"
Sep 12 17:23:44.590323 kubelet[3398]: I0912 17:23:44.590288 3398 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 12 17:23:45.005606 containerd[1875]: time="2025-09-12T17:23:45.005488529Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:23:45.008594 containerd[1875]: time="2025-09-12T17:23:45.008563760Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4605606"
Sep 12 17:23:45.012233 containerd[1875]:
time="2025-09-12T17:23:45.012190579Z" level=info msg="ImageCreate event name:\"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:23:45.015826 containerd[1875]: time="2025-09-12T17:23:45.015785829Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:23:45.016461 containerd[1875]: time="2025-09-12T17:23:45.016165051Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"5974839\" in 1.378559961s" Sep 12 17:23:45.016461 containerd[1875]: time="2025-09-12T17:23:45.016190756Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\"" Sep 12 17:23:45.018966 containerd[1875]: time="2025-09-12T17:23:45.018945815Z" level=info msg="CreateContainer within sandbox \"b185e6133c99bac0be693f2f34ab46455874d19f5688dfa5cc25be649bc1d216\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 12 17:23:45.037652 containerd[1875]: time="2025-09-12T17:23:45.037625851Z" level=info msg="Container 5a3274dd084bef05616d253c30b513e6d4a1b504f16d4e6ed94e0821cf3d3975: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:23:45.040932 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3295458067.mount: Deactivated successfully. 
Sep 12 17:23:45.054932 containerd[1875]: time="2025-09-12T17:23:45.054857970Z" level=info msg="CreateContainer within sandbox \"b185e6133c99bac0be693f2f34ab46455874d19f5688dfa5cc25be649bc1d216\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"5a3274dd084bef05616d253c30b513e6d4a1b504f16d4e6ed94e0821cf3d3975\"" Sep 12 17:23:45.055337 containerd[1875]: time="2025-09-12T17:23:45.055318330Z" level=info msg="StartContainer for \"5a3274dd084bef05616d253c30b513e6d4a1b504f16d4e6ed94e0821cf3d3975\"" Sep 12 17:23:45.056245 containerd[1875]: time="2025-09-12T17:23:45.056216947Z" level=info msg="connecting to shim 5a3274dd084bef05616d253c30b513e6d4a1b504f16d4e6ed94e0821cf3d3975" address="unix:///run/containerd/s/7950f366a9012b7d2794e493caf8fdd5dd01b8a84d7b594a84d9435f45ffbde8" protocol=ttrpc version=3 Sep 12 17:23:45.075594 systemd[1]: Started cri-containerd-5a3274dd084bef05616d253c30b513e6d4a1b504f16d4e6ed94e0821cf3d3975.scope - libcontainer container 5a3274dd084bef05616d253c30b513e6d4a1b504f16d4e6ed94e0821cf3d3975. 
Sep 12 17:23:45.095605 systemd-networkd[1679]: cali2267dfae6b1: Gained IPv6LL Sep 12 17:23:45.116890 containerd[1875]: time="2025-09-12T17:23:45.116711069Z" level=info msg="StartContainer for \"5a3274dd084bef05616d253c30b513e6d4a1b504f16d4e6ed94e0821cf3d3975\" returns successfully" Sep 12 17:23:45.117521 containerd[1875]: time="2025-09-12T17:23:45.117503274Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 12 17:23:45.702745 systemd-networkd[1679]: vxlan.calico: Link UP Sep 12 17:23:45.702753 systemd-networkd[1679]: vxlan.calico: Gained carrier Sep 12 17:23:45.740748 containerd[1875]: time="2025-09-12T17:23:45.740507962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-qzvb7,Uid:9d32a396-40df-4829-9715-f7b972ecd86c,Namespace:calico-system,Attempt:0,}" Sep 12 17:23:45.849075 systemd-networkd[1679]: calide4877d70d3: Link UP Sep 12 17:23:45.849581 systemd-networkd[1679]: calide4877d70d3: Gained carrier Sep 12 17:23:45.874261 containerd[1875]: 2025-09-12 17:23:45.783 [INFO][4776] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4426.1.0--a--9410d45923-k8s-goldmane--54d579b49d--qzvb7-eth0 goldmane-54d579b49d- calico-system 9d32a396-40df-4829-9715-f7b972ecd86c 829 0 2025-09-12 17:23:23 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4426.1.0-a-9410d45923 goldmane-54d579b49d-qzvb7 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calide4877d70d3 [] [] }} ContainerID="64dda442d6bfda75bb2a15e3ee61f2723a6619761bdc27b100a219a42ef0f06a" Namespace="calico-system" Pod="goldmane-54d579b49d-qzvb7" WorkloadEndpoint="ci--4426.1.0--a--9410d45923-k8s-goldmane--54d579b49d--qzvb7-" Sep 12 17:23:45.874261 containerd[1875]: 2025-09-12 17:23:45.783 [INFO][4776] 
cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="64dda442d6bfda75bb2a15e3ee61f2723a6619761bdc27b100a219a42ef0f06a" Namespace="calico-system" Pod="goldmane-54d579b49d-qzvb7" WorkloadEndpoint="ci--4426.1.0--a--9410d45923-k8s-goldmane--54d579b49d--qzvb7-eth0" Sep 12 17:23:45.874261 containerd[1875]: 2025-09-12 17:23:45.804 [INFO][4788] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="64dda442d6bfda75bb2a15e3ee61f2723a6619761bdc27b100a219a42ef0f06a" HandleID="k8s-pod-network.64dda442d6bfda75bb2a15e3ee61f2723a6619761bdc27b100a219a42ef0f06a" Workload="ci--4426.1.0--a--9410d45923-k8s-goldmane--54d579b49d--qzvb7-eth0" Sep 12 17:23:45.874441 containerd[1875]: 2025-09-12 17:23:45.804 [INFO][4788] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="64dda442d6bfda75bb2a15e3ee61f2723a6619761bdc27b100a219a42ef0f06a" HandleID="k8s-pod-network.64dda442d6bfda75bb2a15e3ee61f2723a6619761bdc27b100a219a42ef0f06a" Workload="ci--4426.1.0--a--9410d45923-k8s-goldmane--54d579b49d--qzvb7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3080), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4426.1.0-a-9410d45923", "pod":"goldmane-54d579b49d-qzvb7", "timestamp":"2025-09-12 17:23:45.804157199 +0000 UTC"}, Hostname:"ci-4426.1.0-a-9410d45923", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:23:45.874441 containerd[1875]: 2025-09-12 17:23:45.804 [INFO][4788] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:23:45.874441 containerd[1875]: 2025-09-12 17:23:45.804 [INFO][4788] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:23:45.874441 containerd[1875]: 2025-09-12 17:23:45.804 [INFO][4788] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4426.1.0-a-9410d45923' Sep 12 17:23:45.874441 containerd[1875]: 2025-09-12 17:23:45.809 [INFO][4788] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.64dda442d6bfda75bb2a15e3ee61f2723a6619761bdc27b100a219a42ef0f06a" host="ci-4426.1.0-a-9410d45923" Sep 12 17:23:45.874441 containerd[1875]: 2025-09-12 17:23:45.812 [INFO][4788] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4426.1.0-a-9410d45923" Sep 12 17:23:45.874441 containerd[1875]: 2025-09-12 17:23:45.816 [INFO][4788] ipam/ipam.go 511: Trying affinity for 192.168.108.64/26 host="ci-4426.1.0-a-9410d45923" Sep 12 17:23:45.874441 containerd[1875]: 2025-09-12 17:23:45.818 [INFO][4788] ipam/ipam.go 158: Attempting to load block cidr=192.168.108.64/26 host="ci-4426.1.0-a-9410d45923" Sep 12 17:23:45.874441 containerd[1875]: 2025-09-12 17:23:45.820 [INFO][4788] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.108.64/26 host="ci-4426.1.0-a-9410d45923" Sep 12 17:23:45.874589 containerd[1875]: 2025-09-12 17:23:45.820 [INFO][4788] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.108.64/26 handle="k8s-pod-network.64dda442d6bfda75bb2a15e3ee61f2723a6619761bdc27b100a219a42ef0f06a" host="ci-4426.1.0-a-9410d45923" Sep 12 17:23:45.874589 containerd[1875]: 2025-09-12 17:23:45.822 [INFO][4788] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.64dda442d6bfda75bb2a15e3ee61f2723a6619761bdc27b100a219a42ef0f06a Sep 12 17:23:45.874589 containerd[1875]: 2025-09-12 17:23:45.828 [INFO][4788] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.108.64/26 handle="k8s-pod-network.64dda442d6bfda75bb2a15e3ee61f2723a6619761bdc27b100a219a42ef0f06a" host="ci-4426.1.0-a-9410d45923" Sep 12 17:23:45.874589 containerd[1875]: 2025-09-12 17:23:45.841 [INFO][4788] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.108.66/26] block=192.168.108.64/26 handle="k8s-pod-network.64dda442d6bfda75bb2a15e3ee61f2723a6619761bdc27b100a219a42ef0f06a" host="ci-4426.1.0-a-9410d45923" Sep 12 17:23:45.874589 containerd[1875]: 2025-09-12 17:23:45.842 [INFO][4788] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.108.66/26] handle="k8s-pod-network.64dda442d6bfda75bb2a15e3ee61f2723a6619761bdc27b100a219a42ef0f06a" host="ci-4426.1.0-a-9410d45923" Sep 12 17:23:45.874589 containerd[1875]: 2025-09-12 17:23:45.842 [INFO][4788] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:23:45.874589 containerd[1875]: 2025-09-12 17:23:45.842 [INFO][4788] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.108.66/26] IPv6=[] ContainerID="64dda442d6bfda75bb2a15e3ee61f2723a6619761bdc27b100a219a42ef0f06a" HandleID="k8s-pod-network.64dda442d6bfda75bb2a15e3ee61f2723a6619761bdc27b100a219a42ef0f06a" Workload="ci--4426.1.0--a--9410d45923-k8s-goldmane--54d579b49d--qzvb7-eth0" Sep 12 17:23:45.875180 containerd[1875]: 2025-09-12 17:23:45.844 [INFO][4776] cni-plugin/k8s.go 418: Populated endpoint ContainerID="64dda442d6bfda75bb2a15e3ee61f2723a6619761bdc27b100a219a42ef0f06a" Namespace="calico-system" Pod="goldmane-54d579b49d-qzvb7" WorkloadEndpoint="ci--4426.1.0--a--9410d45923-k8s-goldmane--54d579b49d--qzvb7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4426.1.0--a--9410d45923-k8s-goldmane--54d579b49d--qzvb7-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"9d32a396-40df-4829-9715-f7b972ecd86c", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 23, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4426.1.0-a-9410d45923", ContainerID:"", Pod:"goldmane-54d579b49d-qzvb7", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.108.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calide4877d70d3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:23:45.875273 containerd[1875]: 2025-09-12 17:23:45.844 [INFO][4776] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.108.66/32] ContainerID="64dda442d6bfda75bb2a15e3ee61f2723a6619761bdc27b100a219a42ef0f06a" Namespace="calico-system" Pod="goldmane-54d579b49d-qzvb7" WorkloadEndpoint="ci--4426.1.0--a--9410d45923-k8s-goldmane--54d579b49d--qzvb7-eth0" Sep 12 17:23:45.875273 containerd[1875]: 2025-09-12 17:23:45.844 [INFO][4776] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calide4877d70d3 ContainerID="64dda442d6bfda75bb2a15e3ee61f2723a6619761bdc27b100a219a42ef0f06a" Namespace="calico-system" Pod="goldmane-54d579b49d-qzvb7" WorkloadEndpoint="ci--4426.1.0--a--9410d45923-k8s-goldmane--54d579b49d--qzvb7-eth0" Sep 12 17:23:45.875273 containerd[1875]: 2025-09-12 17:23:45.850 [INFO][4776] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="64dda442d6bfda75bb2a15e3ee61f2723a6619761bdc27b100a219a42ef0f06a" Namespace="calico-system" Pod="goldmane-54d579b49d-qzvb7" WorkloadEndpoint="ci--4426.1.0--a--9410d45923-k8s-goldmane--54d579b49d--qzvb7-eth0" Sep 12 17:23:45.875354 containerd[1875]: 2025-09-12 17:23:45.851 [INFO][4776] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="64dda442d6bfda75bb2a15e3ee61f2723a6619761bdc27b100a219a42ef0f06a" Namespace="calico-system" Pod="goldmane-54d579b49d-qzvb7" WorkloadEndpoint="ci--4426.1.0--a--9410d45923-k8s-goldmane--54d579b49d--qzvb7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4426.1.0--a--9410d45923-k8s-goldmane--54d579b49d--qzvb7-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"9d32a396-40df-4829-9715-f7b972ecd86c", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 23, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4426.1.0-a-9410d45923", ContainerID:"64dda442d6bfda75bb2a15e3ee61f2723a6619761bdc27b100a219a42ef0f06a", Pod:"goldmane-54d579b49d-qzvb7", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.108.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calide4877d70d3", MAC:"ee:58:75:4e:34:7e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:23:45.875394 containerd[1875]: 2025-09-12 17:23:45.871 [INFO][4776] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="64dda442d6bfda75bb2a15e3ee61f2723a6619761bdc27b100a219a42ef0f06a" Namespace="calico-system" Pod="goldmane-54d579b49d-qzvb7" WorkloadEndpoint="ci--4426.1.0--a--9410d45923-k8s-goldmane--54d579b49d--qzvb7-eth0" Sep 12 17:23:45.910104 containerd[1875]: time="2025-09-12T17:23:45.910068916Z" level=info msg="connecting to shim 64dda442d6bfda75bb2a15e3ee61f2723a6619761bdc27b100a219a42ef0f06a" address="unix:///run/containerd/s/bc7b11977a2b683de997165b33a8048876665399d4e69c1e5c394c645dcf74d6" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:23:45.924696 systemd[1]: Started cri-containerd-64dda442d6bfda75bb2a15e3ee61f2723a6619761bdc27b100a219a42ef0f06a.scope - libcontainer container 64dda442d6bfda75bb2a15e3ee61f2723a6619761bdc27b100a219a42ef0f06a. Sep 12 17:23:45.956482 containerd[1875]: time="2025-09-12T17:23:45.956228712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-qzvb7,Uid:9d32a396-40df-4829-9715-f7b972ecd86c,Namespace:calico-system,Attempt:0,} returns sandbox id \"64dda442d6bfda75bb2a15e3ee61f2723a6619761bdc27b100a219a42ef0f06a\"" Sep 12 17:23:47.399647 systemd-networkd[1679]: vxlan.calico: Gained IPv6LL Sep 12 17:23:47.655966 systemd-networkd[1679]: calide4877d70d3: Gained IPv6LL Sep 12 17:23:47.739720 containerd[1875]: time="2025-09-12T17:23:47.739669403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7ff6f6f7b7-fffcq,Uid:993d4da9-2993-4f01-b49e-29cd40727590,Namespace:calico-apiserver,Attempt:0,}" Sep 12 17:23:47.868485 systemd-networkd[1679]: cali2677b2502bb: Link UP Sep 12 17:23:47.868638 systemd-networkd[1679]: cali2677b2502bb: Gained carrier Sep 12 17:23:47.888834 containerd[1875]: 2025-09-12 17:23:47.799 [INFO][4896] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4426.1.0--a--9410d45923-k8s-calico--apiserver--7ff6f6f7b7--fffcq-eth0 calico-apiserver-7ff6f6f7b7- calico-apiserver 993d4da9-2993-4f01-b49e-29cd40727590 822 0 
2025-09-12 17:23:19 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7ff6f6f7b7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4426.1.0-a-9410d45923 calico-apiserver-7ff6f6f7b7-fffcq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2677b2502bb [] [] }} ContainerID="b1bb38c6e7e9e81df6f5816dd18c19db678d693ece47d6159e5cba577c49d0d2" Namespace="calico-apiserver" Pod="calico-apiserver-7ff6f6f7b7-fffcq" WorkloadEndpoint="ci--4426.1.0--a--9410d45923-k8s-calico--apiserver--7ff6f6f7b7--fffcq-" Sep 12 17:23:47.888834 containerd[1875]: 2025-09-12 17:23:47.799 [INFO][4896] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b1bb38c6e7e9e81df6f5816dd18c19db678d693ece47d6159e5cba577c49d0d2" Namespace="calico-apiserver" Pod="calico-apiserver-7ff6f6f7b7-fffcq" WorkloadEndpoint="ci--4426.1.0--a--9410d45923-k8s-calico--apiserver--7ff6f6f7b7--fffcq-eth0" Sep 12 17:23:47.888834 containerd[1875]: 2025-09-12 17:23:47.826 [INFO][4908] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b1bb38c6e7e9e81df6f5816dd18c19db678d693ece47d6159e5cba577c49d0d2" HandleID="k8s-pod-network.b1bb38c6e7e9e81df6f5816dd18c19db678d693ece47d6159e5cba577c49d0d2" Workload="ci--4426.1.0--a--9410d45923-k8s-calico--apiserver--7ff6f6f7b7--fffcq-eth0" Sep 12 17:23:47.889030 containerd[1875]: 2025-09-12 17:23:47.826 [INFO][4908] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b1bb38c6e7e9e81df6f5816dd18c19db678d693ece47d6159e5cba577c49d0d2" HandleID="k8s-pod-network.b1bb38c6e7e9e81df6f5816dd18c19db678d693ece47d6159e5cba577c49d0d2" Workload="ci--4426.1.0--a--9410d45923-k8s-calico--apiserver--7ff6f6f7b7--fffcq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3020), Attrs:map[string]string{"namespace":"calico-apiserver", 
"node":"ci-4426.1.0-a-9410d45923", "pod":"calico-apiserver-7ff6f6f7b7-fffcq", "timestamp":"2025-09-12 17:23:47.826110744 +0000 UTC"}, Hostname:"ci-4426.1.0-a-9410d45923", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:23:47.889030 containerd[1875]: 2025-09-12 17:23:47.826 [INFO][4908] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:23:47.889030 containerd[1875]: 2025-09-12 17:23:47.826 [INFO][4908] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:23:47.889030 containerd[1875]: 2025-09-12 17:23:47.826 [INFO][4908] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4426.1.0-a-9410d45923' Sep 12 17:23:47.889030 containerd[1875]: 2025-09-12 17:23:47.832 [INFO][4908] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b1bb38c6e7e9e81df6f5816dd18c19db678d693ece47d6159e5cba577c49d0d2" host="ci-4426.1.0-a-9410d45923" Sep 12 17:23:47.889030 containerd[1875]: 2025-09-12 17:23:47.835 [INFO][4908] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4426.1.0-a-9410d45923" Sep 12 17:23:47.889030 containerd[1875]: 2025-09-12 17:23:47.840 [INFO][4908] ipam/ipam.go 511: Trying affinity for 192.168.108.64/26 host="ci-4426.1.0-a-9410d45923" Sep 12 17:23:47.889030 containerd[1875]: 2025-09-12 17:23:47.841 [INFO][4908] ipam/ipam.go 158: Attempting to load block cidr=192.168.108.64/26 host="ci-4426.1.0-a-9410d45923" Sep 12 17:23:47.889030 containerd[1875]: 2025-09-12 17:23:47.843 [INFO][4908] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.108.64/26 host="ci-4426.1.0-a-9410d45923" Sep 12 17:23:47.889410 containerd[1875]: 2025-09-12 17:23:47.843 [INFO][4908] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.108.64/26 
handle="k8s-pod-network.b1bb38c6e7e9e81df6f5816dd18c19db678d693ece47d6159e5cba577c49d0d2" host="ci-4426.1.0-a-9410d45923" Sep 12 17:23:47.889410 containerd[1875]: 2025-09-12 17:23:47.844 [INFO][4908] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b1bb38c6e7e9e81df6f5816dd18c19db678d693ece47d6159e5cba577c49d0d2 Sep 12 17:23:47.889410 containerd[1875]: 2025-09-12 17:23:47.849 [INFO][4908] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.108.64/26 handle="k8s-pod-network.b1bb38c6e7e9e81df6f5816dd18c19db678d693ece47d6159e5cba577c49d0d2" host="ci-4426.1.0-a-9410d45923" Sep 12 17:23:47.889410 containerd[1875]: 2025-09-12 17:23:47.860 [INFO][4908] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.108.67/26] block=192.168.108.64/26 handle="k8s-pod-network.b1bb38c6e7e9e81df6f5816dd18c19db678d693ece47d6159e5cba577c49d0d2" host="ci-4426.1.0-a-9410d45923" Sep 12 17:23:47.889410 containerd[1875]: 2025-09-12 17:23:47.860 [INFO][4908] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.108.67/26] handle="k8s-pod-network.b1bb38c6e7e9e81df6f5816dd18c19db678d693ece47d6159e5cba577c49d0d2" host="ci-4426.1.0-a-9410d45923" Sep 12 17:23:47.889410 containerd[1875]: 2025-09-12 17:23:47.860 [INFO][4908] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:23:47.889410 containerd[1875]: 2025-09-12 17:23:47.861 [INFO][4908] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.108.67/26] IPv6=[] ContainerID="b1bb38c6e7e9e81df6f5816dd18c19db678d693ece47d6159e5cba577c49d0d2" HandleID="k8s-pod-network.b1bb38c6e7e9e81df6f5816dd18c19db678d693ece47d6159e5cba577c49d0d2" Workload="ci--4426.1.0--a--9410d45923-k8s-calico--apiserver--7ff6f6f7b7--fffcq-eth0" Sep 12 17:23:47.889696 containerd[1875]: 2025-09-12 17:23:47.864 [INFO][4896] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b1bb38c6e7e9e81df6f5816dd18c19db678d693ece47d6159e5cba577c49d0d2" Namespace="calico-apiserver" Pod="calico-apiserver-7ff6f6f7b7-fffcq" WorkloadEndpoint="ci--4426.1.0--a--9410d45923-k8s-calico--apiserver--7ff6f6f7b7--fffcq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4426.1.0--a--9410d45923-k8s-calico--apiserver--7ff6f6f7b7--fffcq-eth0", GenerateName:"calico-apiserver-7ff6f6f7b7-", Namespace:"calico-apiserver", SelfLink:"", UID:"993d4da9-2993-4f01-b49e-29cd40727590", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 23, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7ff6f6f7b7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4426.1.0-a-9410d45923", ContainerID:"", Pod:"calico-apiserver-7ff6f6f7b7-fffcq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.108.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2677b2502bb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:23:47.889844 containerd[1875]: 2025-09-12 17:23:47.864 [INFO][4896] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.108.67/32] ContainerID="b1bb38c6e7e9e81df6f5816dd18c19db678d693ece47d6159e5cba577c49d0d2" Namespace="calico-apiserver" Pod="calico-apiserver-7ff6f6f7b7-fffcq" WorkloadEndpoint="ci--4426.1.0--a--9410d45923-k8s-calico--apiserver--7ff6f6f7b7--fffcq-eth0" Sep 12 17:23:47.889844 containerd[1875]: 2025-09-12 17:23:47.864 [INFO][4896] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2677b2502bb ContainerID="b1bb38c6e7e9e81df6f5816dd18c19db678d693ece47d6159e5cba577c49d0d2" Namespace="calico-apiserver" Pod="calico-apiserver-7ff6f6f7b7-fffcq" WorkloadEndpoint="ci--4426.1.0--a--9410d45923-k8s-calico--apiserver--7ff6f6f7b7--fffcq-eth0" Sep 12 17:23:47.889844 containerd[1875]: 2025-09-12 17:23:47.869 [INFO][4896] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b1bb38c6e7e9e81df6f5816dd18c19db678d693ece47d6159e5cba577c49d0d2" Namespace="calico-apiserver" Pod="calico-apiserver-7ff6f6f7b7-fffcq" WorkloadEndpoint="ci--4426.1.0--a--9410d45923-k8s-calico--apiserver--7ff6f6f7b7--fffcq-eth0" Sep 12 17:23:47.889923 containerd[1875]: 2025-09-12 17:23:47.871 [INFO][4896] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b1bb38c6e7e9e81df6f5816dd18c19db678d693ece47d6159e5cba577c49d0d2" Namespace="calico-apiserver" Pod="calico-apiserver-7ff6f6f7b7-fffcq" WorkloadEndpoint="ci--4426.1.0--a--9410d45923-k8s-calico--apiserver--7ff6f6f7b7--fffcq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4426.1.0--a--9410d45923-k8s-calico--apiserver--7ff6f6f7b7--fffcq-eth0", GenerateName:"calico-apiserver-7ff6f6f7b7-", Namespace:"calico-apiserver", SelfLink:"", UID:"993d4da9-2993-4f01-b49e-29cd40727590", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 23, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7ff6f6f7b7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4426.1.0-a-9410d45923", ContainerID:"b1bb38c6e7e9e81df6f5816dd18c19db678d693ece47d6159e5cba577c49d0d2", Pod:"calico-apiserver-7ff6f6f7b7-fffcq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.108.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2677b2502bb", MAC:"e6:73:a5:b8:b8:a7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:23:47.889964 containerd[1875]: 2025-09-12 17:23:47.886 [INFO][4896] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b1bb38c6e7e9e81df6f5816dd18c19db678d693ece47d6159e5cba577c49d0d2" Namespace="calico-apiserver" Pod="calico-apiserver-7ff6f6f7b7-fffcq" WorkloadEndpoint="ci--4426.1.0--a--9410d45923-k8s-calico--apiserver--7ff6f6f7b7--fffcq-eth0" Sep 12 17:23:47.941170 containerd[1875]: time="2025-09-12T17:23:47.940628683Z" 
level=info msg="connecting to shim b1bb38c6e7e9e81df6f5816dd18c19db678d693ece47d6159e5cba577c49d0d2" address="unix:///run/containerd/s/e3fb56f280a210fcc727f228f89cf69bcdf4a63f67c208c1ab1cc96764b76236" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:23:47.976642 systemd[1]: Started cri-containerd-b1bb38c6e7e9e81df6f5816dd18c19db678d693ece47d6159e5cba577c49d0d2.scope - libcontainer container b1bb38c6e7e9e81df6f5816dd18c19db678d693ece47d6159e5cba577c49d0d2. Sep 12 17:23:48.024758 containerd[1875]: time="2025-09-12T17:23:48.024723715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7ff6f6f7b7-fffcq,Uid:993d4da9-2993-4f01-b49e-29cd40727590,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"b1bb38c6e7e9e81df6f5816dd18c19db678d693ece47d6159e5cba577c49d0d2\"" Sep 12 17:23:48.073166 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3777860315.mount: Deactivated successfully. Sep 12 17:23:48.344988 containerd[1875]: time="2025-09-12T17:23:48.344494730Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:23:48.347105 containerd[1875]: time="2025-09-12T17:23:48.347073800Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=30823700" Sep 12 17:23:48.350627 containerd[1875]: time="2025-09-12T17:23:48.350582967Z" level=info msg="ImageCreate event name:\"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:23:48.354322 containerd[1875]: time="2025-09-12T17:23:48.354297045Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:23:48.354934 containerd[1875]: time="2025-09-12T17:23:48.354634921Z" level=info 
msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"30823530\" in 3.237111662s" Sep 12 17:23:48.354934 containerd[1875]: time="2025-09-12T17:23:48.354663370Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\"" Sep 12 17:23:48.356180 containerd[1875]: time="2025-09-12T17:23:48.356157480Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 12 17:23:48.357198 containerd[1875]: time="2025-09-12T17:23:48.357172253Z" level=info msg="CreateContainer within sandbox \"b185e6133c99bac0be693f2f34ab46455874d19f5688dfa5cc25be649bc1d216\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 12 17:23:48.374522 containerd[1875]: time="2025-09-12T17:23:48.374498183Z" level=info msg="Container 7a376ba9b66c21f2fd0609ad611195154bd25802189f20d969e6b9a7e0e3f9b4: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:23:48.389657 containerd[1875]: time="2025-09-12T17:23:48.389625842Z" level=info msg="CreateContainer within sandbox \"b185e6133c99bac0be693f2f34ab46455874d19f5688dfa5cc25be649bc1d216\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"7a376ba9b66c21f2fd0609ad611195154bd25802189f20d969e6b9a7e0e3f9b4\"" Sep 12 17:23:48.390487 containerd[1875]: time="2025-09-12T17:23:48.390181118Z" level=info msg="StartContainer for \"7a376ba9b66c21f2fd0609ad611195154bd25802189f20d969e6b9a7e0e3f9b4\"" Sep 12 17:23:48.391319 containerd[1875]: time="2025-09-12T17:23:48.391295382Z" level=info msg="connecting to shim 7a376ba9b66c21f2fd0609ad611195154bd25802189f20d969e6b9a7e0e3f9b4" 
address="unix:///run/containerd/s/7950f366a9012b7d2794e493caf8fdd5dd01b8a84d7b594a84d9435f45ffbde8" protocol=ttrpc version=3 Sep 12 17:23:48.410601 systemd[1]: Started cri-containerd-7a376ba9b66c21f2fd0609ad611195154bd25802189f20d969e6b9a7e0e3f9b4.scope - libcontainer container 7a376ba9b66c21f2fd0609ad611195154bd25802189f20d969e6b9a7e0e3f9b4. Sep 12 17:23:48.445375 containerd[1875]: time="2025-09-12T17:23:48.445328415Z" level=info msg="StartContainer for \"7a376ba9b66c21f2fd0609ad611195154bd25802189f20d969e6b9a7e0e3f9b4\" returns successfully" Sep 12 17:23:48.740319 containerd[1875]: time="2025-09-12T17:23:48.739970418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7dcbdd784d-pcpqj,Uid:b6e1a7a1-e091-4016-a23d-41642e3383a2,Namespace:calico-system,Attempt:0,}" Sep 12 17:23:48.740820 containerd[1875]: time="2025-09-12T17:23:48.740804296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rvwsb,Uid:4ba03ff9-6320-42fb-94ed-e6ab8f2858f4,Namespace:kube-system,Attempt:0,}" Sep 12 17:23:48.848708 systemd-networkd[1679]: cali0095a9bfb48: Link UP Sep 12 17:23:48.849941 systemd-networkd[1679]: cali0095a9bfb48: Gained carrier Sep 12 17:23:48.867915 containerd[1875]: 2025-09-12 17:23:48.782 [INFO][5004] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4426.1.0--a--9410d45923-k8s-calico--kube--controllers--7dcbdd784d--pcpqj-eth0 calico-kube-controllers-7dcbdd784d- calico-system b6e1a7a1-e091-4016-a23d-41642e3383a2 825 0 2025-09-12 17:23:22 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7dcbdd784d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4426.1.0-a-9410d45923 calico-kube-controllers-7dcbdd784d-pcpqj eth0 calico-kube-controllers [] [] [kns.calico-system 
ksa.calico-system.calico-kube-controllers] cali0095a9bfb48 [] [] }} ContainerID="9f671319fa3a441b31f5a82800657b8b1095999392ec4e3adff80a035690cf3d" Namespace="calico-system" Pod="calico-kube-controllers-7dcbdd784d-pcpqj" WorkloadEndpoint="ci--4426.1.0--a--9410d45923-k8s-calico--kube--controllers--7dcbdd784d--pcpqj-" Sep 12 17:23:48.867915 containerd[1875]: 2025-09-12 17:23:48.782 [INFO][5004] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9f671319fa3a441b31f5a82800657b8b1095999392ec4e3adff80a035690cf3d" Namespace="calico-system" Pod="calico-kube-controllers-7dcbdd784d-pcpqj" WorkloadEndpoint="ci--4426.1.0--a--9410d45923-k8s-calico--kube--controllers--7dcbdd784d--pcpqj-eth0" Sep 12 17:23:48.867915 containerd[1875]: 2025-09-12 17:23:48.807 [INFO][5027] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9f671319fa3a441b31f5a82800657b8b1095999392ec4e3adff80a035690cf3d" HandleID="k8s-pod-network.9f671319fa3a441b31f5a82800657b8b1095999392ec4e3adff80a035690cf3d" Workload="ci--4426.1.0--a--9410d45923-k8s-calico--kube--controllers--7dcbdd784d--pcpqj-eth0" Sep 12 17:23:48.868065 containerd[1875]: 2025-09-12 17:23:48.807 [INFO][5027] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9f671319fa3a441b31f5a82800657b8b1095999392ec4e3adff80a035690cf3d" HandleID="k8s-pod-network.9f671319fa3a441b31f5a82800657b8b1095999392ec4e3adff80a035690cf3d" Workload="ci--4426.1.0--a--9410d45923-k8s-calico--kube--controllers--7dcbdd784d--pcpqj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b8b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4426.1.0-a-9410d45923", "pod":"calico-kube-controllers-7dcbdd784d-pcpqj", "timestamp":"2025-09-12 17:23:48.807236361 +0000 UTC"}, Hostname:"ci-4426.1.0-a-9410d45923", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload"} Sep 12 17:23:48.868065 containerd[1875]: 2025-09-12 17:23:48.807 [INFO][5027] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:23:48.868065 containerd[1875]: 2025-09-12 17:23:48.807 [INFO][5027] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:23:48.868065 containerd[1875]: 2025-09-12 17:23:48.807 [INFO][5027] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4426.1.0-a-9410d45923' Sep 12 17:23:48.868065 containerd[1875]: 2025-09-12 17:23:48.813 [INFO][5027] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9f671319fa3a441b31f5a82800657b8b1095999392ec4e3adff80a035690cf3d" host="ci-4426.1.0-a-9410d45923" Sep 12 17:23:48.868065 containerd[1875]: 2025-09-12 17:23:48.816 [INFO][5027] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4426.1.0-a-9410d45923" Sep 12 17:23:48.868065 containerd[1875]: 2025-09-12 17:23:48.820 [INFO][5027] ipam/ipam.go 511: Trying affinity for 192.168.108.64/26 host="ci-4426.1.0-a-9410d45923" Sep 12 17:23:48.868065 containerd[1875]: 2025-09-12 17:23:48.821 [INFO][5027] ipam/ipam.go 158: Attempting to load block cidr=192.168.108.64/26 host="ci-4426.1.0-a-9410d45923" Sep 12 17:23:48.868065 containerd[1875]: 2025-09-12 17:23:48.823 [INFO][5027] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.108.64/26 host="ci-4426.1.0-a-9410d45923" Sep 12 17:23:48.868959 containerd[1875]: 2025-09-12 17:23:48.823 [INFO][5027] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.108.64/26 handle="k8s-pod-network.9f671319fa3a441b31f5a82800657b8b1095999392ec4e3adff80a035690cf3d" host="ci-4426.1.0-a-9410d45923" Sep 12 17:23:48.868959 containerd[1875]: 2025-09-12 17:23:48.824 [INFO][5027] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9f671319fa3a441b31f5a82800657b8b1095999392ec4e3adff80a035690cf3d Sep 12 17:23:48.868959 containerd[1875]: 2025-09-12 17:23:48.829 [INFO][5027] 
ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.108.64/26 handle="k8s-pod-network.9f671319fa3a441b31f5a82800657b8b1095999392ec4e3adff80a035690cf3d" host="ci-4426.1.0-a-9410d45923" Sep 12 17:23:48.868959 containerd[1875]: 2025-09-12 17:23:48.838 [INFO][5027] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.108.68/26] block=192.168.108.64/26 handle="k8s-pod-network.9f671319fa3a441b31f5a82800657b8b1095999392ec4e3adff80a035690cf3d" host="ci-4426.1.0-a-9410d45923" Sep 12 17:23:48.868959 containerd[1875]: 2025-09-12 17:23:48.838 [INFO][5027] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.108.68/26] handle="k8s-pod-network.9f671319fa3a441b31f5a82800657b8b1095999392ec4e3adff80a035690cf3d" host="ci-4426.1.0-a-9410d45923" Sep 12 17:23:48.868959 containerd[1875]: 2025-09-12 17:23:48.838 [INFO][5027] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:23:48.868959 containerd[1875]: 2025-09-12 17:23:48.838 [INFO][5027] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.108.68/26] IPv6=[] ContainerID="9f671319fa3a441b31f5a82800657b8b1095999392ec4e3adff80a035690cf3d" HandleID="k8s-pod-network.9f671319fa3a441b31f5a82800657b8b1095999392ec4e3adff80a035690cf3d" Workload="ci--4426.1.0--a--9410d45923-k8s-calico--kube--controllers--7dcbdd784d--pcpqj-eth0" Sep 12 17:23:48.869060 containerd[1875]: 2025-09-12 17:23:48.840 [INFO][5004] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9f671319fa3a441b31f5a82800657b8b1095999392ec4e3adff80a035690cf3d" Namespace="calico-system" Pod="calico-kube-controllers-7dcbdd784d-pcpqj" WorkloadEndpoint="ci--4426.1.0--a--9410d45923-k8s-calico--kube--controllers--7dcbdd784d--pcpqj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4426.1.0--a--9410d45923-k8s-calico--kube--controllers--7dcbdd784d--pcpqj-eth0", GenerateName:"calico-kube-controllers-7dcbdd784d-", 
Namespace:"calico-system", SelfLink:"", UID:"b6e1a7a1-e091-4016-a23d-41642e3383a2", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 23, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7dcbdd784d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4426.1.0-a-9410d45923", ContainerID:"", Pod:"calico-kube-controllers-7dcbdd784d-pcpqj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.108.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0095a9bfb48", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:23:48.869105 containerd[1875]: 2025-09-12 17:23:48.841 [INFO][5004] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.108.68/32] ContainerID="9f671319fa3a441b31f5a82800657b8b1095999392ec4e3adff80a035690cf3d" Namespace="calico-system" Pod="calico-kube-controllers-7dcbdd784d-pcpqj" WorkloadEndpoint="ci--4426.1.0--a--9410d45923-k8s-calico--kube--controllers--7dcbdd784d--pcpqj-eth0" Sep 12 17:23:48.869105 containerd[1875]: 2025-09-12 17:23:48.841 [INFO][5004] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0095a9bfb48 ContainerID="9f671319fa3a441b31f5a82800657b8b1095999392ec4e3adff80a035690cf3d" Namespace="calico-system" Pod="calico-kube-controllers-7dcbdd784d-pcpqj" 
WorkloadEndpoint="ci--4426.1.0--a--9410d45923-k8s-calico--kube--controllers--7dcbdd784d--pcpqj-eth0" Sep 12 17:23:48.869105 containerd[1875]: 2025-09-12 17:23:48.849 [INFO][5004] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9f671319fa3a441b31f5a82800657b8b1095999392ec4e3adff80a035690cf3d" Namespace="calico-system" Pod="calico-kube-controllers-7dcbdd784d-pcpqj" WorkloadEndpoint="ci--4426.1.0--a--9410d45923-k8s-calico--kube--controllers--7dcbdd784d--pcpqj-eth0" Sep 12 17:23:48.869150 containerd[1875]: 2025-09-12 17:23:48.852 [INFO][5004] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9f671319fa3a441b31f5a82800657b8b1095999392ec4e3adff80a035690cf3d" Namespace="calico-system" Pod="calico-kube-controllers-7dcbdd784d-pcpqj" WorkloadEndpoint="ci--4426.1.0--a--9410d45923-k8s-calico--kube--controllers--7dcbdd784d--pcpqj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4426.1.0--a--9410d45923-k8s-calico--kube--controllers--7dcbdd784d--pcpqj-eth0", GenerateName:"calico-kube-controllers-7dcbdd784d-", Namespace:"calico-system", SelfLink:"", UID:"b6e1a7a1-e091-4016-a23d-41642e3383a2", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 23, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7dcbdd784d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4426.1.0-a-9410d45923", 
ContainerID:"9f671319fa3a441b31f5a82800657b8b1095999392ec4e3adff80a035690cf3d", Pod:"calico-kube-controllers-7dcbdd784d-pcpqj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.108.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0095a9bfb48", MAC:"9a:44:55:4e:64:7b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:23:48.869185 containerd[1875]: 2025-09-12 17:23:48.865 [INFO][5004] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9f671319fa3a441b31f5a82800657b8b1095999392ec4e3adff80a035690cf3d" Namespace="calico-system" Pod="calico-kube-controllers-7dcbdd784d-pcpqj" WorkloadEndpoint="ci--4426.1.0--a--9410d45923-k8s-calico--kube--controllers--7dcbdd784d--pcpqj-eth0" Sep 12 17:23:48.895578 kubelet[3398]: I0912 17:23:48.895522 3398 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-574b9c8d6-x25x9" podStartSLOduration=2.176988371 podStartE2EDuration="6.895507216s" podCreationTimestamp="2025-09-12 17:23:42 +0000 UTC" firstStartedPulling="2025-09-12 17:23:43.637023061 +0000 UTC m=+40.057940901" lastFinishedPulling="2025-09-12 17:23:48.35554189 +0000 UTC m=+44.776459746" observedRunningTime="2025-09-12 17:23:48.895233406 +0000 UTC m=+45.316151278" watchObservedRunningTime="2025-09-12 17:23:48.895507216 +0000 UTC m=+45.316425064" Sep 12 17:23:48.924931 containerd[1875]: time="2025-09-12T17:23:48.924577587Z" level=info msg="connecting to shim 9f671319fa3a441b31f5a82800657b8b1095999392ec4e3adff80a035690cf3d" address="unix:///run/containerd/s/cc07e4b889b9ceb59375fc3fea6d05a0e03442ffae66b897e6d3dcbca53341c0" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:23:48.950033 systemd[1]: Started 
cri-containerd-9f671319fa3a441b31f5a82800657b8b1095999392ec4e3adff80a035690cf3d.scope - libcontainer container 9f671319fa3a441b31f5a82800657b8b1095999392ec4e3adff80a035690cf3d. Sep 12 17:23:48.979434 systemd-networkd[1679]: calie62acbdd21a: Link UP Sep 12 17:23:48.980250 systemd-networkd[1679]: calie62acbdd21a: Gained carrier Sep 12 17:23:48.989563 containerd[1875]: time="2025-09-12T17:23:48.989529847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7dcbdd784d-pcpqj,Uid:b6e1a7a1-e091-4016-a23d-41642e3383a2,Namespace:calico-system,Attempt:0,} returns sandbox id \"9f671319fa3a441b31f5a82800657b8b1095999392ec4e3adff80a035690cf3d\"" Sep 12 17:23:49.002512 containerd[1875]: 2025-09-12 17:23:48.786 [INFO][5013] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4426.1.0--a--9410d45923-k8s-coredns--668d6bf9bc--rvwsb-eth0 coredns-668d6bf9bc- kube-system 4ba03ff9-6320-42fb-94ed-e6ab8f2858f4 817 0 2025-09-12 17:23:09 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4426.1.0-a-9410d45923 coredns-668d6bf9bc-rvwsb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie62acbdd21a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="6bd134f8526edf8d4e2b66c640824a4afb822c46e31842563ffb9dc64eed8731" Namespace="kube-system" Pod="coredns-668d6bf9bc-rvwsb" WorkloadEndpoint="ci--4426.1.0--a--9410d45923-k8s-coredns--668d6bf9bc--rvwsb-" Sep 12 17:23:49.002512 containerd[1875]: 2025-09-12 17:23:48.786 [INFO][5013] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6bd134f8526edf8d4e2b66c640824a4afb822c46e31842563ffb9dc64eed8731" Namespace="kube-system" Pod="coredns-668d6bf9bc-rvwsb" WorkloadEndpoint="ci--4426.1.0--a--9410d45923-k8s-coredns--668d6bf9bc--rvwsb-eth0" Sep 12 17:23:49.002512 
containerd[1875]: 2025-09-12 17:23:48.807 [INFO][5032] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6bd134f8526edf8d4e2b66c640824a4afb822c46e31842563ffb9dc64eed8731" HandleID="k8s-pod-network.6bd134f8526edf8d4e2b66c640824a4afb822c46e31842563ffb9dc64eed8731" Workload="ci--4426.1.0--a--9410d45923-k8s-coredns--668d6bf9bc--rvwsb-eth0" Sep 12 17:23:49.002654 containerd[1875]: 2025-09-12 17:23:48.807 [INFO][5032] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6bd134f8526edf8d4e2b66c640824a4afb822c46e31842563ffb9dc64eed8731" HandleID="k8s-pod-network.6bd134f8526edf8d4e2b66c640824a4afb822c46e31842563ffb9dc64eed8731" Workload="ci--4426.1.0--a--9410d45923-k8s-coredns--668d6bf9bc--rvwsb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002baff0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4426.1.0-a-9410d45923", "pod":"coredns-668d6bf9bc-rvwsb", "timestamp":"2025-09-12 17:23:48.807359798 +0000 UTC"}, Hostname:"ci-4426.1.0-a-9410d45923", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:23:49.002654 containerd[1875]: 2025-09-12 17:23:48.807 [INFO][5032] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:23:49.002654 containerd[1875]: 2025-09-12 17:23:48.838 [INFO][5032] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:23:49.002654 containerd[1875]: 2025-09-12 17:23:48.838 [INFO][5032] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4426.1.0-a-9410d45923' Sep 12 17:23:49.002654 containerd[1875]: 2025-09-12 17:23:48.916 [INFO][5032] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6bd134f8526edf8d4e2b66c640824a4afb822c46e31842563ffb9dc64eed8731" host="ci-4426.1.0-a-9410d45923" Sep 12 17:23:49.002654 containerd[1875]: 2025-09-12 17:23:48.922 [INFO][5032] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4426.1.0-a-9410d45923" Sep 12 17:23:49.002654 containerd[1875]: 2025-09-12 17:23:48.929 [INFO][5032] ipam/ipam.go 511: Trying affinity for 192.168.108.64/26 host="ci-4426.1.0-a-9410d45923" Sep 12 17:23:49.002654 containerd[1875]: 2025-09-12 17:23:48.932 [INFO][5032] ipam/ipam.go 158: Attempting to load block cidr=192.168.108.64/26 host="ci-4426.1.0-a-9410d45923" Sep 12 17:23:49.002654 containerd[1875]: 2025-09-12 17:23:48.937 [INFO][5032] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.108.64/26 host="ci-4426.1.0-a-9410d45923" Sep 12 17:23:49.002794 containerd[1875]: 2025-09-12 17:23:48.937 [INFO][5032] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.108.64/26 handle="k8s-pod-network.6bd134f8526edf8d4e2b66c640824a4afb822c46e31842563ffb9dc64eed8731" host="ci-4426.1.0-a-9410d45923" Sep 12 17:23:49.002794 containerd[1875]: 2025-09-12 17:23:48.938 [INFO][5032] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6bd134f8526edf8d4e2b66c640824a4afb822c46e31842563ffb9dc64eed8731 Sep 12 17:23:49.002794 containerd[1875]: 2025-09-12 17:23:48.948 [INFO][5032] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.108.64/26 handle="k8s-pod-network.6bd134f8526edf8d4e2b66c640824a4afb822c46e31842563ffb9dc64eed8731" host="ci-4426.1.0-a-9410d45923" Sep 12 17:23:49.002794 containerd[1875]: 2025-09-12 17:23:48.973 [INFO][5032] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.108.69/26] block=192.168.108.64/26 handle="k8s-pod-network.6bd134f8526edf8d4e2b66c640824a4afb822c46e31842563ffb9dc64eed8731" host="ci-4426.1.0-a-9410d45923" Sep 12 17:23:49.002794 containerd[1875]: 2025-09-12 17:23:48.973 [INFO][5032] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.108.69/26] handle="k8s-pod-network.6bd134f8526edf8d4e2b66c640824a4afb822c46e31842563ffb9dc64eed8731" host="ci-4426.1.0-a-9410d45923" Sep 12 17:23:49.002794 containerd[1875]: 2025-09-12 17:23:48.973 [INFO][5032] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:23:49.002794 containerd[1875]: 2025-09-12 17:23:48.973 [INFO][5032] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.108.69/26] IPv6=[] ContainerID="6bd134f8526edf8d4e2b66c640824a4afb822c46e31842563ffb9dc64eed8731" HandleID="k8s-pod-network.6bd134f8526edf8d4e2b66c640824a4afb822c46e31842563ffb9dc64eed8731" Workload="ci--4426.1.0--a--9410d45923-k8s-coredns--668d6bf9bc--rvwsb-eth0" Sep 12 17:23:49.002893 containerd[1875]: 2025-09-12 17:23:48.975 [INFO][5013] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6bd134f8526edf8d4e2b66c640824a4afb822c46e31842563ffb9dc64eed8731" Namespace="kube-system" Pod="coredns-668d6bf9bc-rvwsb" WorkloadEndpoint="ci--4426.1.0--a--9410d45923-k8s-coredns--668d6bf9bc--rvwsb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4426.1.0--a--9410d45923-k8s-coredns--668d6bf9bc--rvwsb-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"4ba03ff9-6320-42fb-94ed-e6ab8f2858f4", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 23, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4426.1.0-a-9410d45923", ContainerID:"", Pod:"coredns-668d6bf9bc-rvwsb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.108.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie62acbdd21a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:23:49.002893 containerd[1875]: 2025-09-12 17:23:48.975 [INFO][5013] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.108.69/32] ContainerID="6bd134f8526edf8d4e2b66c640824a4afb822c46e31842563ffb9dc64eed8731" Namespace="kube-system" Pod="coredns-668d6bf9bc-rvwsb" WorkloadEndpoint="ci--4426.1.0--a--9410d45923-k8s-coredns--668d6bf9bc--rvwsb-eth0" Sep 12 17:23:49.002893 containerd[1875]: 2025-09-12 17:23:48.975 [INFO][5013] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie62acbdd21a ContainerID="6bd134f8526edf8d4e2b66c640824a4afb822c46e31842563ffb9dc64eed8731" Namespace="kube-system" Pod="coredns-668d6bf9bc-rvwsb" WorkloadEndpoint="ci--4426.1.0--a--9410d45923-k8s-coredns--668d6bf9bc--rvwsb-eth0" Sep 12 17:23:49.002893 containerd[1875]: 2025-09-12 17:23:48.980 [INFO][5013] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6bd134f8526edf8d4e2b66c640824a4afb822c46e31842563ffb9dc64eed8731" Namespace="kube-system" Pod="coredns-668d6bf9bc-rvwsb" WorkloadEndpoint="ci--4426.1.0--a--9410d45923-k8s-coredns--668d6bf9bc--rvwsb-eth0" Sep 12 17:23:49.002893 containerd[1875]: 2025-09-12 17:23:48.984 [INFO][5013] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6bd134f8526edf8d4e2b66c640824a4afb822c46e31842563ffb9dc64eed8731" Namespace="kube-system" Pod="coredns-668d6bf9bc-rvwsb" WorkloadEndpoint="ci--4426.1.0--a--9410d45923-k8s-coredns--668d6bf9bc--rvwsb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4426.1.0--a--9410d45923-k8s-coredns--668d6bf9bc--rvwsb-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"4ba03ff9-6320-42fb-94ed-e6ab8f2858f4", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 23, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4426.1.0-a-9410d45923", ContainerID:"6bd134f8526edf8d4e2b66c640824a4afb822c46e31842563ffb9dc64eed8731", Pod:"coredns-668d6bf9bc-rvwsb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.108.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie62acbdd21a", 
MAC:"1a:f1:6f:07:27:c4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:23:49.002893 containerd[1875]: 2025-09-12 17:23:48.999 [INFO][5013] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6bd134f8526edf8d4e2b66c640824a4afb822c46e31842563ffb9dc64eed8731" Namespace="kube-system" Pod="coredns-668d6bf9bc-rvwsb" WorkloadEndpoint="ci--4426.1.0--a--9410d45923-k8s-coredns--668d6bf9bc--rvwsb-eth0" Sep 12 17:23:49.045608 containerd[1875]: time="2025-09-12T17:23:49.045527519Z" level=info msg="connecting to shim 6bd134f8526edf8d4e2b66c640824a4afb822c46e31842563ffb9dc64eed8731" address="unix:///run/containerd/s/4aa999e81a73700e1e8d03f5044a64acfb780853c2c2637cebe9ae8ceaedfc78" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:23:49.063609 systemd[1]: Started cri-containerd-6bd134f8526edf8d4e2b66c640824a4afb822c46e31842563ffb9dc64eed8731.scope - libcontainer container 6bd134f8526edf8d4e2b66c640824a4afb822c46e31842563ffb9dc64eed8731. 
Sep 12 17:23:49.092331 containerd[1875]: time="2025-09-12T17:23:49.092282593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rvwsb,Uid:4ba03ff9-6320-42fb-94ed-e6ab8f2858f4,Namespace:kube-system,Attempt:0,} returns sandbox id \"6bd134f8526edf8d4e2b66c640824a4afb822c46e31842563ffb9dc64eed8731\"" Sep 12 17:23:49.094994 containerd[1875]: time="2025-09-12T17:23:49.094945865Z" level=info msg="CreateContainer within sandbox \"6bd134f8526edf8d4e2b66c640824a4afb822c46e31842563ffb9dc64eed8731\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:23:49.116940 containerd[1875]: time="2025-09-12T17:23:49.116904227Z" level=info msg="Container 22465ecaedbda8412fe5e6a291074f8ed49e6d801fa14daa19a6202fde25c83d: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:23:49.130305 containerd[1875]: time="2025-09-12T17:23:49.130270022Z" level=info msg="CreateContainer within sandbox \"6bd134f8526edf8d4e2b66c640824a4afb822c46e31842563ffb9dc64eed8731\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"22465ecaedbda8412fe5e6a291074f8ed49e6d801fa14daa19a6202fde25c83d\"" Sep 12 17:23:49.130860 containerd[1875]: time="2025-09-12T17:23:49.130828819Z" level=info msg="StartContainer for \"22465ecaedbda8412fe5e6a291074f8ed49e6d801fa14daa19a6202fde25c83d\"" Sep 12 17:23:49.131673 containerd[1875]: time="2025-09-12T17:23:49.131645352Z" level=info msg="connecting to shim 22465ecaedbda8412fe5e6a291074f8ed49e6d801fa14daa19a6202fde25c83d" address="unix:///run/containerd/s/4aa999e81a73700e1e8d03f5044a64acfb780853c2c2637cebe9ae8ceaedfc78" protocol=ttrpc version=3 Sep 12 17:23:49.148604 systemd[1]: Started cri-containerd-22465ecaedbda8412fe5e6a291074f8ed49e6d801fa14daa19a6202fde25c83d.scope - libcontainer container 22465ecaedbda8412fe5e6a291074f8ed49e6d801fa14daa19a6202fde25c83d. 
Sep 12 17:23:49.174279 containerd[1875]: time="2025-09-12T17:23:49.174244868Z" level=info msg="StartContainer for \"22465ecaedbda8412fe5e6a291074f8ed49e6d801fa14daa19a6202fde25c83d\" returns successfully" Sep 12 17:23:49.191667 systemd-networkd[1679]: cali2677b2502bb: Gained IPv6LL Sep 12 17:23:49.740512 containerd[1875]: time="2025-09-12T17:23:49.740473920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k8z9b,Uid:7aa40ba9-fdce-4e43-989e-6ae77304a9da,Namespace:kube-system,Attempt:0,}" Sep 12 17:23:49.740933 containerd[1875]: time="2025-09-12T17:23:49.740905007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vj2b6,Uid:91cf3b25-a4f8-46ef-a218-a6fd5f87b47a,Namespace:calico-system,Attempt:0,}" Sep 12 17:23:49.869991 systemd-networkd[1679]: cali11dbded3b2a: Link UP Sep 12 17:23:49.870556 systemd-networkd[1679]: cali11dbded3b2a: Gained carrier Sep 12 17:23:49.890365 containerd[1875]: 2025-09-12 17:23:49.787 [INFO][5182] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4426.1.0--a--9410d45923-k8s-coredns--668d6bf9bc--k8z9b-eth0 coredns-668d6bf9bc- kube-system 7aa40ba9-fdce-4e43-989e-6ae77304a9da 830 0 2025-09-12 17:23:09 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4426.1.0-a-9410d45923 coredns-668d6bf9bc-k8z9b eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali11dbded3b2a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="c0239a6e24139e2dfa114e1fd2e24f3389db1f315c570bc69d105705894ad18c" Namespace="kube-system" Pod="coredns-668d6bf9bc-k8z9b" WorkloadEndpoint="ci--4426.1.0--a--9410d45923-k8s-coredns--668d6bf9bc--k8z9b-" Sep 12 17:23:49.890365 containerd[1875]: 2025-09-12 17:23:49.787 [INFO][5182] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="c0239a6e24139e2dfa114e1fd2e24f3389db1f315c570bc69d105705894ad18c" Namespace="kube-system" Pod="coredns-668d6bf9bc-k8z9b" WorkloadEndpoint="ci--4426.1.0--a--9410d45923-k8s-coredns--668d6bf9bc--k8z9b-eth0" Sep 12 17:23:49.890365 containerd[1875]: 2025-09-12 17:23:49.818 [INFO][5208] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c0239a6e24139e2dfa114e1fd2e24f3389db1f315c570bc69d105705894ad18c" HandleID="k8s-pod-network.c0239a6e24139e2dfa114e1fd2e24f3389db1f315c570bc69d105705894ad18c" Workload="ci--4426.1.0--a--9410d45923-k8s-coredns--668d6bf9bc--k8z9b-eth0" Sep 12 17:23:49.890365 containerd[1875]: 2025-09-12 17:23:49.819 [INFO][5208] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c0239a6e24139e2dfa114e1fd2e24f3389db1f315c570bc69d105705894ad18c" HandleID="k8s-pod-network.c0239a6e24139e2dfa114e1fd2e24f3389db1f315c570bc69d105705894ad18c" Workload="ci--4426.1.0--a--9410d45923-k8s-coredns--668d6bf9bc--k8z9b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b230), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4426.1.0-a-9410d45923", "pod":"coredns-668d6bf9bc-k8z9b", "timestamp":"2025-09-12 17:23:49.818945108 +0000 UTC"}, Hostname:"ci-4426.1.0-a-9410d45923", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:23:49.890365 containerd[1875]: 2025-09-12 17:23:49.819 [INFO][5208] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:23:49.890365 containerd[1875]: 2025-09-12 17:23:49.819 [INFO][5208] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:23:49.890365 containerd[1875]: 2025-09-12 17:23:49.819 [INFO][5208] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4426.1.0-a-9410d45923' Sep 12 17:23:49.890365 containerd[1875]: 2025-09-12 17:23:49.825 [INFO][5208] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c0239a6e24139e2dfa114e1fd2e24f3389db1f315c570bc69d105705894ad18c" host="ci-4426.1.0-a-9410d45923" Sep 12 17:23:49.890365 containerd[1875]: 2025-09-12 17:23:49.829 [INFO][5208] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4426.1.0-a-9410d45923" Sep 12 17:23:49.890365 containerd[1875]: 2025-09-12 17:23:49.833 [INFO][5208] ipam/ipam.go 511: Trying affinity for 192.168.108.64/26 host="ci-4426.1.0-a-9410d45923" Sep 12 17:23:49.890365 containerd[1875]: 2025-09-12 17:23:49.834 [INFO][5208] ipam/ipam.go 158: Attempting to load block cidr=192.168.108.64/26 host="ci-4426.1.0-a-9410d45923" Sep 12 17:23:49.890365 containerd[1875]: 2025-09-12 17:23:49.836 [INFO][5208] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.108.64/26 host="ci-4426.1.0-a-9410d45923" Sep 12 17:23:49.890365 containerd[1875]: 2025-09-12 17:23:49.837 [INFO][5208] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.108.64/26 handle="k8s-pod-network.c0239a6e24139e2dfa114e1fd2e24f3389db1f315c570bc69d105705894ad18c" host="ci-4426.1.0-a-9410d45923" Sep 12 17:23:49.890365 containerd[1875]: 2025-09-12 17:23:49.838 [INFO][5208] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c0239a6e24139e2dfa114e1fd2e24f3389db1f315c570bc69d105705894ad18c Sep 12 17:23:49.890365 containerd[1875]: 2025-09-12 17:23:49.843 [INFO][5208] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.108.64/26 handle="k8s-pod-network.c0239a6e24139e2dfa114e1fd2e24f3389db1f315c570bc69d105705894ad18c" host="ci-4426.1.0-a-9410d45923" Sep 12 17:23:49.890365 containerd[1875]: 2025-09-12 17:23:49.855 [INFO][5208] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.108.70/26] block=192.168.108.64/26 handle="k8s-pod-network.c0239a6e24139e2dfa114e1fd2e24f3389db1f315c570bc69d105705894ad18c" host="ci-4426.1.0-a-9410d45923" Sep 12 17:23:49.890365 containerd[1875]: 2025-09-12 17:23:49.855 [INFO][5208] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.108.70/26] handle="k8s-pod-network.c0239a6e24139e2dfa114e1fd2e24f3389db1f315c570bc69d105705894ad18c" host="ci-4426.1.0-a-9410d45923" Sep 12 17:23:49.890365 containerd[1875]: 2025-09-12 17:23:49.855 [INFO][5208] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:23:49.890365 containerd[1875]: 2025-09-12 17:23:49.855 [INFO][5208] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.108.70/26] IPv6=[] ContainerID="c0239a6e24139e2dfa114e1fd2e24f3389db1f315c570bc69d105705894ad18c" HandleID="k8s-pod-network.c0239a6e24139e2dfa114e1fd2e24f3389db1f315c570bc69d105705894ad18c" Workload="ci--4426.1.0--a--9410d45923-k8s-coredns--668d6bf9bc--k8z9b-eth0" Sep 12 17:23:49.890870 containerd[1875]: 2025-09-12 17:23:49.864 [INFO][5182] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c0239a6e24139e2dfa114e1fd2e24f3389db1f315c570bc69d105705894ad18c" Namespace="kube-system" Pod="coredns-668d6bf9bc-k8z9b" WorkloadEndpoint="ci--4426.1.0--a--9410d45923-k8s-coredns--668d6bf9bc--k8z9b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4426.1.0--a--9410d45923-k8s-coredns--668d6bf9bc--k8z9b-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"7aa40ba9-fdce-4e43-989e-6ae77304a9da", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 23, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4426.1.0-a-9410d45923", ContainerID:"", Pod:"coredns-668d6bf9bc-k8z9b", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.108.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali11dbded3b2a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:23:49.890870 containerd[1875]: 2025-09-12 17:23:49.865 [INFO][5182] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.108.70/32] ContainerID="c0239a6e24139e2dfa114e1fd2e24f3389db1f315c570bc69d105705894ad18c" Namespace="kube-system" Pod="coredns-668d6bf9bc-k8z9b" WorkloadEndpoint="ci--4426.1.0--a--9410d45923-k8s-coredns--668d6bf9bc--k8z9b-eth0" Sep 12 17:23:49.890870 containerd[1875]: 2025-09-12 17:23:49.865 [INFO][5182] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali11dbded3b2a ContainerID="c0239a6e24139e2dfa114e1fd2e24f3389db1f315c570bc69d105705894ad18c" Namespace="kube-system" Pod="coredns-668d6bf9bc-k8z9b" WorkloadEndpoint="ci--4426.1.0--a--9410d45923-k8s-coredns--668d6bf9bc--k8z9b-eth0" Sep 12 17:23:49.890870 containerd[1875]: 2025-09-12 17:23:49.870 [INFO][5182] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c0239a6e24139e2dfa114e1fd2e24f3389db1f315c570bc69d105705894ad18c" Namespace="kube-system" Pod="coredns-668d6bf9bc-k8z9b" WorkloadEndpoint="ci--4426.1.0--a--9410d45923-k8s-coredns--668d6bf9bc--k8z9b-eth0" Sep 12 17:23:49.890870 containerd[1875]: 2025-09-12 17:23:49.871 [INFO][5182] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c0239a6e24139e2dfa114e1fd2e24f3389db1f315c570bc69d105705894ad18c" Namespace="kube-system" Pod="coredns-668d6bf9bc-k8z9b" WorkloadEndpoint="ci--4426.1.0--a--9410d45923-k8s-coredns--668d6bf9bc--k8z9b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4426.1.0--a--9410d45923-k8s-coredns--668d6bf9bc--k8z9b-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"7aa40ba9-fdce-4e43-989e-6ae77304a9da", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 23, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4426.1.0-a-9410d45923", ContainerID:"c0239a6e24139e2dfa114e1fd2e24f3389db1f315c570bc69d105705894ad18c", Pod:"coredns-668d6bf9bc-k8z9b", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.108.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali11dbded3b2a", 
MAC:"96:e6:b0:e7:b7:61", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:23:49.890870 containerd[1875]: 2025-09-12 17:23:49.885 [INFO][5182] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c0239a6e24139e2dfa114e1fd2e24f3389db1f315c570bc69d105705894ad18c" Namespace="kube-system" Pod="coredns-668d6bf9bc-k8z9b" WorkloadEndpoint="ci--4426.1.0--a--9410d45923-k8s-coredns--668d6bf9bc--k8z9b-eth0" Sep 12 17:23:49.906726 kubelet[3398]: I0912 17:23:49.905411 3398 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-rvwsb" podStartSLOduration=40.905395361 podStartE2EDuration="40.905395361s" podCreationTimestamp="2025-09-12 17:23:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:23:49.904551771 +0000 UTC m=+46.325469619" watchObservedRunningTime="2025-09-12 17:23:49.905395361 +0000 UTC m=+46.326313209" Sep 12 17:23:49.946788 containerd[1875]: time="2025-09-12T17:23:49.946749096Z" level=info msg="connecting to shim c0239a6e24139e2dfa114e1fd2e24f3389db1f315c570bc69d105705894ad18c" address="unix:///run/containerd/s/a22222e37b9ff34b7f664d892d34c3796046008ec9e7ca0e79ffa48e91305127" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:23:49.960920 systemd-networkd[1679]: cali0095a9bfb48: Gained IPv6LL Sep 12 17:23:49.982767 systemd[1]: Started 
cri-containerd-c0239a6e24139e2dfa114e1fd2e24f3389db1f315c570bc69d105705894ad18c.scope - libcontainer container c0239a6e24139e2dfa114e1fd2e24f3389db1f315c570bc69d105705894ad18c. Sep 12 17:23:49.996991 systemd-networkd[1679]: cali9099001a9c6: Link UP Sep 12 17:23:49.999961 systemd-networkd[1679]: cali9099001a9c6: Gained carrier Sep 12 17:23:50.017817 containerd[1875]: 2025-09-12 17:23:49.801 [INFO][5194] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4426.1.0--a--9410d45923-k8s-csi--node--driver--vj2b6-eth0 csi-node-driver- calico-system 91cf3b25-a4f8-46ef-a218-a6fd5f87b47a 693 0 2025-09-12 17:23:22 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4426.1.0-a-9410d45923 csi-node-driver-vj2b6 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali9099001a9c6 [] [] }} ContainerID="602b8ba76a76d89d71afb51f8ddaf30605030e8f0a73a6b371e1f608c0b668fd" Namespace="calico-system" Pod="csi-node-driver-vj2b6" WorkloadEndpoint="ci--4426.1.0--a--9410d45923-k8s-csi--node--driver--vj2b6-" Sep 12 17:23:50.017817 containerd[1875]: 2025-09-12 17:23:49.801 [INFO][5194] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="602b8ba76a76d89d71afb51f8ddaf30605030e8f0a73a6b371e1f608c0b668fd" Namespace="calico-system" Pod="csi-node-driver-vj2b6" WorkloadEndpoint="ci--4426.1.0--a--9410d45923-k8s-csi--node--driver--vj2b6-eth0" Sep 12 17:23:50.017817 containerd[1875]: 2025-09-12 17:23:49.822 [INFO][5214] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="602b8ba76a76d89d71afb51f8ddaf30605030e8f0a73a6b371e1f608c0b668fd" 
HandleID="k8s-pod-network.602b8ba76a76d89d71afb51f8ddaf30605030e8f0a73a6b371e1f608c0b668fd" Workload="ci--4426.1.0--a--9410d45923-k8s-csi--node--driver--vj2b6-eth0" Sep 12 17:23:50.017817 containerd[1875]: 2025-09-12 17:23:49.822 [INFO][5214] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="602b8ba76a76d89d71afb51f8ddaf30605030e8f0a73a6b371e1f608c0b668fd" HandleID="k8s-pod-network.602b8ba76a76d89d71afb51f8ddaf30605030e8f0a73a6b371e1f608c0b668fd" Workload="ci--4426.1.0--a--9410d45923-k8s-csi--node--driver--vj2b6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b610), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4426.1.0-a-9410d45923", "pod":"csi-node-driver-vj2b6", "timestamp":"2025-09-12 17:23:49.822596592 +0000 UTC"}, Hostname:"ci-4426.1.0-a-9410d45923", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:23:50.017817 containerd[1875]: 2025-09-12 17:23:49.822 [INFO][5214] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:23:50.017817 containerd[1875]: 2025-09-12 17:23:49.856 [INFO][5214] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:23:50.017817 containerd[1875]: 2025-09-12 17:23:49.856 [INFO][5214] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4426.1.0-a-9410d45923' Sep 12 17:23:50.017817 containerd[1875]: 2025-09-12 17:23:49.926 [INFO][5214] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.602b8ba76a76d89d71afb51f8ddaf30605030e8f0a73a6b371e1f608c0b668fd" host="ci-4426.1.0-a-9410d45923" Sep 12 17:23:50.017817 containerd[1875]: 2025-09-12 17:23:49.943 [INFO][5214] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4426.1.0-a-9410d45923" Sep 12 17:23:50.017817 containerd[1875]: 2025-09-12 17:23:49.958 [INFO][5214] ipam/ipam.go 511: Trying affinity for 192.168.108.64/26 host="ci-4426.1.0-a-9410d45923" Sep 12 17:23:50.017817 containerd[1875]: 2025-09-12 17:23:49.962 [INFO][5214] ipam/ipam.go 158: Attempting to load block cidr=192.168.108.64/26 host="ci-4426.1.0-a-9410d45923" Sep 12 17:23:50.017817 containerd[1875]: 2025-09-12 17:23:49.965 [INFO][5214] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.108.64/26 host="ci-4426.1.0-a-9410d45923" Sep 12 17:23:50.017817 containerd[1875]: 2025-09-12 17:23:49.965 [INFO][5214] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.108.64/26 handle="k8s-pod-network.602b8ba76a76d89d71afb51f8ddaf30605030e8f0a73a6b371e1f608c0b668fd" host="ci-4426.1.0-a-9410d45923" Sep 12 17:23:50.017817 containerd[1875]: 2025-09-12 17:23:49.966 [INFO][5214] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.602b8ba76a76d89d71afb51f8ddaf30605030e8f0a73a6b371e1f608c0b668fd Sep 12 17:23:50.017817 containerd[1875]: 2025-09-12 17:23:49.976 [INFO][5214] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.108.64/26 handle="k8s-pod-network.602b8ba76a76d89d71afb51f8ddaf30605030e8f0a73a6b371e1f608c0b668fd" host="ci-4426.1.0-a-9410d45923" Sep 12 17:23:50.017817 containerd[1875]: 2025-09-12 17:23:49.987 [INFO][5214] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.108.71/26] block=192.168.108.64/26 handle="k8s-pod-network.602b8ba76a76d89d71afb51f8ddaf30605030e8f0a73a6b371e1f608c0b668fd" host="ci-4426.1.0-a-9410d45923" Sep 12 17:23:50.017817 containerd[1875]: 2025-09-12 17:23:49.988 [INFO][5214] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.108.71/26] handle="k8s-pod-network.602b8ba76a76d89d71afb51f8ddaf30605030e8f0a73a6b371e1f608c0b668fd" host="ci-4426.1.0-a-9410d45923" Sep 12 17:23:50.017817 containerd[1875]: 2025-09-12 17:23:49.988 [INFO][5214] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:23:50.017817 containerd[1875]: 2025-09-12 17:23:49.988 [INFO][5214] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.108.71/26] IPv6=[] ContainerID="602b8ba76a76d89d71afb51f8ddaf30605030e8f0a73a6b371e1f608c0b668fd" HandleID="k8s-pod-network.602b8ba76a76d89d71afb51f8ddaf30605030e8f0a73a6b371e1f608c0b668fd" Workload="ci--4426.1.0--a--9410d45923-k8s-csi--node--driver--vj2b6-eth0" Sep 12 17:23:50.018216 containerd[1875]: 2025-09-12 17:23:49.991 [INFO][5194] cni-plugin/k8s.go 418: Populated endpoint ContainerID="602b8ba76a76d89d71afb51f8ddaf30605030e8f0a73a6b371e1f608c0b668fd" Namespace="calico-system" Pod="csi-node-driver-vj2b6" WorkloadEndpoint="ci--4426.1.0--a--9410d45923-k8s-csi--node--driver--vj2b6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4426.1.0--a--9410d45923-k8s-csi--node--driver--vj2b6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"91cf3b25-a4f8-46ef-a218-a6fd5f87b47a", ResourceVersion:"693", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 23, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4426.1.0-a-9410d45923", ContainerID:"", Pod:"csi-node-driver-vj2b6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.108.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9099001a9c6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:23:50.018216 containerd[1875]: 2025-09-12 17:23:49.992 [INFO][5194] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.108.71/32] ContainerID="602b8ba76a76d89d71afb51f8ddaf30605030e8f0a73a6b371e1f608c0b668fd" Namespace="calico-system" Pod="csi-node-driver-vj2b6" WorkloadEndpoint="ci--4426.1.0--a--9410d45923-k8s-csi--node--driver--vj2b6-eth0" Sep 12 17:23:50.018216 containerd[1875]: 2025-09-12 17:23:49.992 [INFO][5194] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9099001a9c6 ContainerID="602b8ba76a76d89d71afb51f8ddaf30605030e8f0a73a6b371e1f608c0b668fd" Namespace="calico-system" Pod="csi-node-driver-vj2b6" WorkloadEndpoint="ci--4426.1.0--a--9410d45923-k8s-csi--node--driver--vj2b6-eth0" Sep 12 17:23:50.018216 containerd[1875]: 2025-09-12 17:23:49.999 [INFO][5194] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="602b8ba76a76d89d71afb51f8ddaf30605030e8f0a73a6b371e1f608c0b668fd" Namespace="calico-system" Pod="csi-node-driver-vj2b6" WorkloadEndpoint="ci--4426.1.0--a--9410d45923-k8s-csi--node--driver--vj2b6-eth0" Sep 12 17:23:50.018216 
containerd[1875]: 2025-09-12 17:23:50.001 [INFO][5194] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="602b8ba76a76d89d71afb51f8ddaf30605030e8f0a73a6b371e1f608c0b668fd" Namespace="calico-system" Pod="csi-node-driver-vj2b6" WorkloadEndpoint="ci--4426.1.0--a--9410d45923-k8s-csi--node--driver--vj2b6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4426.1.0--a--9410d45923-k8s-csi--node--driver--vj2b6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"91cf3b25-a4f8-46ef-a218-a6fd5f87b47a", ResourceVersion:"693", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 23, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4426.1.0-a-9410d45923", ContainerID:"602b8ba76a76d89d71afb51f8ddaf30605030e8f0a73a6b371e1f608c0b668fd", Pod:"csi-node-driver-vj2b6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.108.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9099001a9c6", MAC:"a2:83:64:43:4c:e3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:23:50.018216 containerd[1875]: 
2025-09-12 17:23:50.014 [INFO][5194] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="602b8ba76a76d89d71afb51f8ddaf30605030e8f0a73a6b371e1f608c0b668fd" Namespace="calico-system" Pod="csi-node-driver-vj2b6" WorkloadEndpoint="ci--4426.1.0--a--9410d45923-k8s-csi--node--driver--vj2b6-eth0" Sep 12 17:23:50.057484 containerd[1875]: time="2025-09-12T17:23:50.057431625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k8z9b,Uid:7aa40ba9-fdce-4e43-989e-6ae77304a9da,Namespace:kube-system,Attempt:0,} returns sandbox id \"c0239a6e24139e2dfa114e1fd2e24f3389db1f315c570bc69d105705894ad18c\"" Sep 12 17:23:50.061642 containerd[1875]: time="2025-09-12T17:23:50.061608736Z" level=info msg="CreateContainer within sandbox \"c0239a6e24139e2dfa114e1fd2e24f3389db1f315c570bc69d105705894ad18c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:23:50.072247 containerd[1875]: time="2025-09-12T17:23:50.072163677Z" level=info msg="connecting to shim 602b8ba76a76d89d71afb51f8ddaf30605030e8f0a73a6b371e1f608c0b668fd" address="unix:///run/containerd/s/41602551f11551526470aa221b1e0c0b6d799b4b48161dcf366acce85b39a21f" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:23:50.084141 containerd[1875]: time="2025-09-12T17:23:50.084114797Z" level=info msg="Container dc27442aad6ab92ccd4e13f16340c52f8da3aca52b303e414bed2ec096496586: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:23:50.097671 systemd[1]: Started cri-containerd-602b8ba76a76d89d71afb51f8ddaf30605030e8f0a73a6b371e1f608c0b668fd.scope - libcontainer container 602b8ba76a76d89d71afb51f8ddaf30605030e8f0a73a6b371e1f608c0b668fd. 
Sep 12 17:23:50.099226 containerd[1875]: time="2025-09-12T17:23:50.099106611Z" level=info msg="CreateContainer within sandbox \"c0239a6e24139e2dfa114e1fd2e24f3389db1f315c570bc69d105705894ad18c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"dc27442aad6ab92ccd4e13f16340c52f8da3aca52b303e414bed2ec096496586\"" Sep 12 17:23:50.100117 containerd[1875]: time="2025-09-12T17:23:50.100090687Z" level=info msg="StartContainer for \"dc27442aad6ab92ccd4e13f16340c52f8da3aca52b303e414bed2ec096496586\"" Sep 12 17:23:50.102134 containerd[1875]: time="2025-09-12T17:23:50.101306267Z" level=info msg="connecting to shim dc27442aad6ab92ccd4e13f16340c52f8da3aca52b303e414bed2ec096496586" address="unix:///run/containerd/s/a22222e37b9ff34b7f664d892d34c3796046008ec9e7ca0e79ffa48e91305127" protocol=ttrpc version=3 Sep 12 17:23:50.124767 systemd[1]: Started cri-containerd-dc27442aad6ab92ccd4e13f16340c52f8da3aca52b303e414bed2ec096496586.scope - libcontainer container dc27442aad6ab92ccd4e13f16340c52f8da3aca52b303e414bed2ec096496586. 
Sep 12 17:23:50.138200 containerd[1875]: time="2025-09-12T17:23:50.138034698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vj2b6,Uid:91cf3b25-a4f8-46ef-a218-a6fd5f87b47a,Namespace:calico-system,Attempt:0,} returns sandbox id \"602b8ba76a76d89d71afb51f8ddaf30605030e8f0a73a6b371e1f608c0b668fd\"" Sep 12 17:23:50.158459 containerd[1875]: time="2025-09-12T17:23:50.158436692Z" level=info msg="StartContainer for \"dc27442aad6ab92ccd4e13f16340c52f8da3aca52b303e414bed2ec096496586\" returns successfully" Sep 12 17:23:50.536603 systemd-networkd[1679]: calie62acbdd21a: Gained IPv6LL Sep 12 17:23:50.739520 containerd[1875]: time="2025-09-12T17:23:50.739440718Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7ff6f6f7b7-m8ldc,Uid:73929bc7-4919-4b77-ba79-6b869c871156,Namespace:calico-apiserver,Attempt:0,}" Sep 12 17:23:50.884052 systemd-networkd[1679]: calidfff3640bce: Link UP Sep 12 17:23:50.886525 systemd-networkd[1679]: calidfff3640bce: Gained carrier Sep 12 17:23:50.906874 containerd[1875]: 2025-09-12 17:23:50.796 [INFO][5376] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4426.1.0--a--9410d45923-k8s-calico--apiserver--7ff6f6f7b7--m8ldc-eth0 calico-apiserver-7ff6f6f7b7- calico-apiserver 73929bc7-4919-4b77-ba79-6b869c871156 828 0 2025-09-12 17:23:19 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7ff6f6f7b7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4426.1.0-a-9410d45923 calico-apiserver-7ff6f6f7b7-m8ldc eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calidfff3640bce [] [] }} ContainerID="a20a299416109be36c2289696b7a2a7acdd8703e7f7162a9af8918c82b9e016d" Namespace="calico-apiserver" Pod="calico-apiserver-7ff6f6f7b7-m8ldc" 
WorkloadEndpoint="ci--4426.1.0--a--9410d45923-k8s-calico--apiserver--7ff6f6f7b7--m8ldc-" Sep 12 17:23:50.906874 containerd[1875]: 2025-09-12 17:23:50.797 [INFO][5376] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a20a299416109be36c2289696b7a2a7acdd8703e7f7162a9af8918c82b9e016d" Namespace="calico-apiserver" Pod="calico-apiserver-7ff6f6f7b7-m8ldc" WorkloadEndpoint="ci--4426.1.0--a--9410d45923-k8s-calico--apiserver--7ff6f6f7b7--m8ldc-eth0" Sep 12 17:23:50.906874 containerd[1875]: 2025-09-12 17:23:50.822 [INFO][5388] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a20a299416109be36c2289696b7a2a7acdd8703e7f7162a9af8918c82b9e016d" HandleID="k8s-pod-network.a20a299416109be36c2289696b7a2a7acdd8703e7f7162a9af8918c82b9e016d" Workload="ci--4426.1.0--a--9410d45923-k8s-calico--apiserver--7ff6f6f7b7--m8ldc-eth0" Sep 12 17:23:50.906874 containerd[1875]: 2025-09-12 17:23:50.822 [INFO][5388] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a20a299416109be36c2289696b7a2a7acdd8703e7f7162a9af8918c82b9e016d" HandleID="k8s-pod-network.a20a299416109be36c2289696b7a2a7acdd8703e7f7162a9af8918c82b9e016d" Workload="ci--4426.1.0--a--9410d45923-k8s-calico--apiserver--7ff6f6f7b7--m8ldc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b890), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4426.1.0-a-9410d45923", "pod":"calico-apiserver-7ff6f6f7b7-m8ldc", "timestamp":"2025-09-12 17:23:50.822211013 +0000 UTC"}, Hostname:"ci-4426.1.0-a-9410d45923", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:23:50.906874 containerd[1875]: 2025-09-12 17:23:50.822 [INFO][5388] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 12 17:23:50.906874 containerd[1875]: 2025-09-12 17:23:50.822 [INFO][5388] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:23:50.906874 containerd[1875]: 2025-09-12 17:23:50.822 [INFO][5388] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4426.1.0-a-9410d45923'
Sep 12 17:23:50.906874 containerd[1875]: 2025-09-12 17:23:50.831 [INFO][5388] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a20a299416109be36c2289696b7a2a7acdd8703e7f7162a9af8918c82b9e016d" host="ci-4426.1.0-a-9410d45923"
Sep 12 17:23:50.906874 containerd[1875]: 2025-09-12 17:23:50.837 [INFO][5388] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4426.1.0-a-9410d45923"
Sep 12 17:23:50.906874 containerd[1875]: 2025-09-12 17:23:50.842 [INFO][5388] ipam/ipam.go 511: Trying affinity for 192.168.108.64/26 host="ci-4426.1.0-a-9410d45923"
Sep 12 17:23:50.906874 containerd[1875]: 2025-09-12 17:23:50.846 [INFO][5388] ipam/ipam.go 158: Attempting to load block cidr=192.168.108.64/26 host="ci-4426.1.0-a-9410d45923"
Sep 12 17:23:50.906874 containerd[1875]: 2025-09-12 17:23:50.849 [INFO][5388] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.108.64/26 host="ci-4426.1.0-a-9410d45923"
Sep 12 17:23:50.906874 containerd[1875]: 2025-09-12 17:23:50.849 [INFO][5388] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.108.64/26 handle="k8s-pod-network.a20a299416109be36c2289696b7a2a7acdd8703e7f7162a9af8918c82b9e016d" host="ci-4426.1.0-a-9410d45923"
Sep 12 17:23:50.906874 containerd[1875]: 2025-09-12 17:23:50.852 [INFO][5388] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a20a299416109be36c2289696b7a2a7acdd8703e7f7162a9af8918c82b9e016d
Sep 12 17:23:50.906874 containerd[1875]: 2025-09-12 17:23:50.860 [INFO][5388] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.108.64/26 handle="k8s-pod-network.a20a299416109be36c2289696b7a2a7acdd8703e7f7162a9af8918c82b9e016d" host="ci-4426.1.0-a-9410d45923"
Sep 12 17:23:50.906874 containerd[1875]: 2025-09-12 17:23:50.872 [INFO][5388] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.108.72/26] block=192.168.108.64/26 handle="k8s-pod-network.a20a299416109be36c2289696b7a2a7acdd8703e7f7162a9af8918c82b9e016d" host="ci-4426.1.0-a-9410d45923"
Sep 12 17:23:50.906874 containerd[1875]: 2025-09-12 17:23:50.873 [INFO][5388] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.108.72/26] handle="k8s-pod-network.a20a299416109be36c2289696b7a2a7acdd8703e7f7162a9af8918c82b9e016d" host="ci-4426.1.0-a-9410d45923"
Sep 12 17:23:50.906874 containerd[1875]: 2025-09-12 17:23:50.873 [INFO][5388] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:23:50.906874 containerd[1875]: 2025-09-12 17:23:50.873 [INFO][5388] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.108.72/26] IPv6=[] ContainerID="a20a299416109be36c2289696b7a2a7acdd8703e7f7162a9af8918c82b9e016d" HandleID="k8s-pod-network.a20a299416109be36c2289696b7a2a7acdd8703e7f7162a9af8918c82b9e016d" Workload="ci--4426.1.0--a--9410d45923-k8s-calico--apiserver--7ff6f6f7b7--m8ldc-eth0"
Sep 12 17:23:50.908272 containerd[1875]: 2025-09-12 17:23:50.875 [INFO][5376] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a20a299416109be36c2289696b7a2a7acdd8703e7f7162a9af8918c82b9e016d" Namespace="calico-apiserver" Pod="calico-apiserver-7ff6f6f7b7-m8ldc" WorkloadEndpoint="ci--4426.1.0--a--9410d45923-k8s-calico--apiserver--7ff6f6f7b7--m8ldc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4426.1.0--a--9410d45923-k8s-calico--apiserver--7ff6f6f7b7--m8ldc-eth0", GenerateName:"calico-apiserver-7ff6f6f7b7-", Namespace:"calico-apiserver", SelfLink:"", UID:"73929bc7-4919-4b77-ba79-6b869c871156", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 23, 19, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7ff6f6f7b7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4426.1.0-a-9410d45923", ContainerID:"", Pod:"calico-apiserver-7ff6f6f7b7-m8ldc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.108.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidfff3640bce", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:23:50.908272 containerd[1875]: 2025-09-12 17:23:50.876 [INFO][5376] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.108.72/32] ContainerID="a20a299416109be36c2289696b7a2a7acdd8703e7f7162a9af8918c82b9e016d" Namespace="calico-apiserver" Pod="calico-apiserver-7ff6f6f7b7-m8ldc" WorkloadEndpoint="ci--4426.1.0--a--9410d45923-k8s-calico--apiserver--7ff6f6f7b7--m8ldc-eth0"
Sep 12 17:23:50.908272 containerd[1875]: 2025-09-12 17:23:50.876 [INFO][5376] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidfff3640bce ContainerID="a20a299416109be36c2289696b7a2a7acdd8703e7f7162a9af8918c82b9e016d" Namespace="calico-apiserver" Pod="calico-apiserver-7ff6f6f7b7-m8ldc" WorkloadEndpoint="ci--4426.1.0--a--9410d45923-k8s-calico--apiserver--7ff6f6f7b7--m8ldc-eth0"
Sep 12 17:23:50.908272 containerd[1875]: 2025-09-12 17:23:50.887 [INFO][5376] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a20a299416109be36c2289696b7a2a7acdd8703e7f7162a9af8918c82b9e016d" Namespace="calico-apiserver" Pod="calico-apiserver-7ff6f6f7b7-m8ldc" WorkloadEndpoint="ci--4426.1.0--a--9410d45923-k8s-calico--apiserver--7ff6f6f7b7--m8ldc-eth0"
Sep 12 17:23:50.908272 containerd[1875]: 2025-09-12 17:23:50.887 [INFO][5376] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a20a299416109be36c2289696b7a2a7acdd8703e7f7162a9af8918c82b9e016d" Namespace="calico-apiserver" Pod="calico-apiserver-7ff6f6f7b7-m8ldc" WorkloadEndpoint="ci--4426.1.0--a--9410d45923-k8s-calico--apiserver--7ff6f6f7b7--m8ldc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4426.1.0--a--9410d45923-k8s-calico--apiserver--7ff6f6f7b7--m8ldc-eth0", GenerateName:"calico-apiserver-7ff6f6f7b7-", Namespace:"calico-apiserver", SelfLink:"", UID:"73929bc7-4919-4b77-ba79-6b869c871156", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 23, 19, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7ff6f6f7b7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4426.1.0-a-9410d45923", ContainerID:"a20a299416109be36c2289696b7a2a7acdd8703e7f7162a9af8918c82b9e016d", Pod:"calico-apiserver-7ff6f6f7b7-m8ldc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.108.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidfff3640bce", MAC:"0a:e9:d9:0d:14:37", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:23:50.908272 containerd[1875]: 2025-09-12 17:23:50.903 [INFO][5376] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a20a299416109be36c2289696b7a2a7acdd8703e7f7162a9af8918c82b9e016d" Namespace="calico-apiserver" Pod="calico-apiserver-7ff6f6f7b7-m8ldc" WorkloadEndpoint="ci--4426.1.0--a--9410d45923-k8s-calico--apiserver--7ff6f6f7b7--m8ldc-eth0"
Sep 12 17:23:50.964426 containerd[1875]: time="2025-09-12T17:23:50.964347671Z" level=info msg="connecting to shim a20a299416109be36c2289696b7a2a7acdd8703e7f7162a9af8918c82b9e016d" address="unix:///run/containerd/s/c52a2a22bbb356bbae451fc432294f1198c87fd28f350ac54fd3efbe8c6ffdee" namespace=k8s.io protocol=ttrpc version=3
Sep 12 17:23:50.976708 kubelet[3398]: I0912 17:23:50.975079 3398 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-k8z9b" podStartSLOduration=41.975060451 podStartE2EDuration="41.975060451s" podCreationTimestamp="2025-09-12 17:23:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:23:50.917718202 +0000 UTC m=+47.338636082" watchObservedRunningTime="2025-09-12 17:23:50.975060451 +0000 UTC m=+47.395978291"
Sep 12 17:23:51.004774 systemd[1]: Started cri-containerd-a20a299416109be36c2289696b7a2a7acdd8703e7f7162a9af8918c82b9e016d.scope - libcontainer container a20a299416109be36c2289696b7a2a7acdd8703e7f7162a9af8918c82b9e016d.
Sep 12 17:23:51.069570 containerd[1875]: time="2025-09-12T17:23:51.069528537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7ff6f6f7b7-m8ldc,Uid:73929bc7-4919-4b77-ba79-6b869c871156,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"a20a299416109be36c2289696b7a2a7acdd8703e7f7162a9af8918c82b9e016d\""
Sep 12 17:23:51.239655 systemd-networkd[1679]: cali9099001a9c6: Gained IPv6LL
Sep 12 17:23:51.602118 containerd[1875]: time="2025-09-12T17:23:51.602067299Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:23:51.604913 containerd[1875]: time="2025-09-12T17:23:51.604512212Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=61845332"
Sep 12 17:23:51.607966 containerd[1875]: time="2025-09-12T17:23:51.607701111Z" level=info msg="ImageCreate event name:\"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:23:51.612602 containerd[1875]: time="2025-09-12T17:23:51.612232675Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:23:51.612787 containerd[1875]: time="2025-09-12T17:23:51.612751414Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"61845178\" in 3.256567292s"
Sep 12 17:23:51.612930 containerd[1875]: time="2025-09-12T17:23:51.612860978Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\""
Sep 12 17:23:51.615523 containerd[1875]: time="2025-09-12T17:23:51.614854930Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\""
Sep 12 17:23:51.616896 containerd[1875]: time="2025-09-12T17:23:51.616772471Z" level=info msg="CreateContainer within sandbox \"64dda442d6bfda75bb2a15e3ee61f2723a6619761bdc27b100a219a42ef0f06a\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Sep 12 17:23:51.637335 containerd[1875]: time="2025-09-12T17:23:51.637299349Z" level=info msg="Container 703c6f9e362392092294f983a9984718b68dac0979eef70de8148d49db9f0f0c: CDI devices from CRI Config.CDIDevices: []"
Sep 12 17:23:51.648093 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount360313812.mount: Deactivated successfully.
Sep 12 17:23:51.655751 containerd[1875]: time="2025-09-12T17:23:51.655709447Z" level=info msg="CreateContainer within sandbox \"64dda442d6bfda75bb2a15e3ee61f2723a6619761bdc27b100a219a42ef0f06a\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"703c6f9e362392092294f983a9984718b68dac0979eef70de8148d49db9f0f0c\""
Sep 12 17:23:51.656767 containerd[1875]: time="2025-09-12T17:23:51.656745052Z" level=info msg="StartContainer for \"703c6f9e362392092294f983a9984718b68dac0979eef70de8148d49db9f0f0c\""
Sep 12 17:23:51.658203 containerd[1875]: time="2025-09-12T17:23:51.658169527Z" level=info msg="connecting to shim 703c6f9e362392092294f983a9984718b68dac0979eef70de8148d49db9f0f0c" address="unix:///run/containerd/s/bc7b11977a2b683de997165b33a8048876665399d4e69c1e5c394c645dcf74d6" protocol=ttrpc version=3
Sep 12 17:23:51.681898 systemd[1]: Started cri-containerd-703c6f9e362392092294f983a9984718b68dac0979eef70de8148d49db9f0f0c.scope - libcontainer container 703c6f9e362392092294f983a9984718b68dac0979eef70de8148d49db9f0f0c.
Sep 12 17:23:51.717704 containerd[1875]: time="2025-09-12T17:23:51.717650717Z" level=info msg="StartContainer for \"703c6f9e362392092294f983a9984718b68dac0979eef70de8148d49db9f0f0c\" returns successfully"
Sep 12 17:23:51.815702 systemd-networkd[1679]: cali11dbded3b2a: Gained IPv6LL
Sep 12 17:23:51.926502 kubelet[3398]: I0912 17:23:51.926124 3398 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-qzvb7" podStartSLOduration=23.269706044 podStartE2EDuration="28.926110653s" podCreationTimestamp="2025-09-12 17:23:23 +0000 UTC" firstStartedPulling="2025-09-12 17:23:45.957520695 +0000 UTC m=+42.378438535" lastFinishedPulling="2025-09-12 17:23:51.613925296 +0000 UTC m=+48.034843144" observedRunningTime="2025-09-12 17:23:51.925773737 +0000 UTC m=+48.346691585" watchObservedRunningTime="2025-09-12 17:23:51.926110653 +0000 UTC m=+48.347028493"
Sep 12 17:23:51.967127 containerd[1875]: time="2025-09-12T17:23:51.967092950Z" level=info msg="TaskExit event in podsandbox handler container_id:\"703c6f9e362392092294f983a9984718b68dac0979eef70de8148d49db9f0f0c\" id:\"1d75624cf2809b77db024c8f4ef849ff4e17fb9c909d002845c2fcade88eaead\" pid:5514 exit_status:1 exited_at:{seconds:1757697831 nanos:966783099}"
Sep 12 17:23:52.327612 systemd-networkd[1679]: calidfff3640bce: Gained IPv6LL
Sep 12 17:23:52.981405 containerd[1875]: time="2025-09-12T17:23:52.981357414Z" level=info msg="TaskExit event in podsandbox handler container_id:\"703c6f9e362392092294f983a9984718b68dac0979eef70de8148d49db9f0f0c\" id:\"f3d9500c308c7a7216f8d382b672f1439e3d63f583a0541ab9b38a1dc46a36d4\" pid:5539 exit_status:1 exited_at:{seconds:1757697832 nanos:980494028}"
Sep 12 17:23:53.981238 containerd[1875]: time="2025-09-12T17:23:53.981154679Z" level=info msg="TaskExit event in podsandbox handler container_id:\"703c6f9e362392092294f983a9984718b68dac0979eef70de8148d49db9f0f0c\" id:\"13c8f16a8922d3fd60bea3a716181d53b563b214e2fbb0ea4369f2bf18d8c08e\" pid:5565 exit_status:1 exited_at:{seconds:1757697833 nanos:980730610}"
Sep 12 17:23:54.837514 containerd[1875]: time="2025-09-12T17:23:54.837445819Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:23:54.840232 containerd[1875]: time="2025-09-12T17:23:54.840082402Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=44530807"
Sep 12 17:23:54.844746 containerd[1875]: time="2025-09-12T17:23:54.844706860Z" level=info msg="ImageCreate event name:\"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:23:54.851054 containerd[1875]: time="2025-09-12T17:23:54.851000839Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:23:54.851354 containerd[1875]: time="2025-09-12T17:23:54.851328097Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 3.23644799s"
Sep 12 17:23:54.851401 containerd[1875]: time="2025-09-12T17:23:54.851358041Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\""
Sep 12 17:23:54.853825 containerd[1875]: time="2025-09-12T17:23:54.853005770Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\""
Sep 12 17:23:54.853825 containerd[1875]: time="2025-09-12T17:23:54.853794210Z" level=info msg="CreateContainer within sandbox \"b1bb38c6e7e9e81df6f5816dd18c19db678d693ece47d6159e5cba577c49d0d2\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Sep 12 17:23:54.875400 containerd[1875]: time="2025-09-12T17:23:54.875339979Z" level=info msg="Container 558481b3a31c6c651d2e7942772b2c114b9ac67204c9bd7f8a11eb1aab48fd7c: CDI devices from CRI Config.CDIDevices: []"
Sep 12 17:23:54.898250 containerd[1875]: time="2025-09-12T17:23:54.898208035Z" level=info msg="CreateContainer within sandbox \"b1bb38c6e7e9e81df6f5816dd18c19db678d693ece47d6159e5cba577c49d0d2\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"558481b3a31c6c651d2e7942772b2c114b9ac67204c9bd7f8a11eb1aab48fd7c\""
Sep 12 17:23:54.898629 containerd[1875]: time="2025-09-12T17:23:54.898614279Z" level=info msg="StartContainer for \"558481b3a31c6c651d2e7942772b2c114b9ac67204c9bd7f8a11eb1aab48fd7c\""
Sep 12 17:23:54.900250 containerd[1875]: time="2025-09-12T17:23:54.900220783Z" level=info msg="connecting to shim 558481b3a31c6c651d2e7942772b2c114b9ac67204c9bd7f8a11eb1aab48fd7c" address="unix:///run/containerd/s/e3fb56f280a210fcc727f228f89cf69bcdf4a63f67c208c1ab1cc96764b76236" protocol=ttrpc version=3
Sep 12 17:23:54.916621 systemd[1]: Started cri-containerd-558481b3a31c6c651d2e7942772b2c114b9ac67204c9bd7f8a11eb1aab48fd7c.scope - libcontainer container 558481b3a31c6c651d2e7942772b2c114b9ac67204c9bd7f8a11eb1aab48fd7c.
Sep 12 17:23:54.964913 containerd[1875]: time="2025-09-12T17:23:54.964881811Z" level=info msg="StartContainer for \"558481b3a31c6c651d2e7942772b2c114b9ac67204c9bd7f8a11eb1aab48fd7c\" returns successfully"
Sep 12 17:23:56.924735 kubelet[3398]: I0912 17:23:56.924697 3398 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 12 17:23:59.514387 containerd[1875]: time="2025-09-12T17:23:59.514334788Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:23:59.516850 containerd[1875]: time="2025-09-12T17:23:59.516814770Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=48134957"
Sep 12 17:23:59.519437 containerd[1875]: time="2025-09-12T17:23:59.519398203Z" level=info msg="ImageCreate event name:\"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:23:59.523942 containerd[1875]: time="2025-09-12T17:23:59.523905700Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:23:59.524447 containerd[1875]: time="2025-09-12T17:23:59.524159149Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"49504166\" in 4.671127666s"
Sep 12 17:23:59.524447 containerd[1875]: time="2025-09-12T17:23:59.524185622Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\""
Sep 12 17:23:59.525163 containerd[1875]: time="2025-09-12T17:23:59.525142186Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\""
Sep 12 17:23:59.540663 containerd[1875]: time="2025-09-12T17:23:59.540640056Z" level=info msg="CreateContainer within sandbox \"9f671319fa3a441b31f5a82800657b8b1095999392ec4e3adff80a035690cf3d\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Sep 12 17:23:59.564027 containerd[1875]: time="2025-09-12T17:23:59.564001189Z" level=info msg="Container f65827dbdfd6c84dd8f50f42a12afd80f589706e9aaf598113fe0537345e8fad: CDI devices from CRI Config.CDIDevices: []"
Sep 12 17:23:59.579973 containerd[1875]: time="2025-09-12T17:23:59.579941068Z" level=info msg="CreateContainer within sandbox \"9f671319fa3a441b31f5a82800657b8b1095999392ec4e3adff80a035690cf3d\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"f65827dbdfd6c84dd8f50f42a12afd80f589706e9aaf598113fe0537345e8fad\""
Sep 12 17:23:59.582981 containerd[1875]: time="2025-09-12T17:23:59.582902043Z" level=info msg="StartContainer for \"f65827dbdfd6c84dd8f50f42a12afd80f589706e9aaf598113fe0537345e8fad\""
Sep 12 17:23:59.583916 containerd[1875]: time="2025-09-12T17:23:59.583870551Z" level=info msg="connecting to shim f65827dbdfd6c84dd8f50f42a12afd80f589706e9aaf598113fe0537345e8fad" address="unix:///run/containerd/s/cc07e4b889b9ceb59375fc3fea6d05a0e03442ffae66b897e6d3dcbca53341c0" protocol=ttrpc version=3
Sep 12 17:23:59.601586 systemd[1]: Started cri-containerd-f65827dbdfd6c84dd8f50f42a12afd80f589706e9aaf598113fe0537345e8fad.scope - libcontainer container f65827dbdfd6c84dd8f50f42a12afd80f589706e9aaf598113fe0537345e8fad.
Sep 12 17:23:59.634625 containerd[1875]: time="2025-09-12T17:23:59.634565934Z" level=info msg="StartContainer for \"f65827dbdfd6c84dd8f50f42a12afd80f589706e9aaf598113fe0537345e8fad\" returns successfully"
Sep 12 17:23:59.950125 kubelet[3398]: I0912 17:23:59.949933 3398 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7ff6f6f7b7-fffcq" podStartSLOduration=34.124330522 podStartE2EDuration="40.949918125s" podCreationTimestamp="2025-09-12 17:23:19 +0000 UTC" firstStartedPulling="2025-09-12 17:23:48.026617152 +0000 UTC m=+44.447534992" lastFinishedPulling="2025-09-12 17:23:54.852204747 +0000 UTC m=+51.273122595" observedRunningTime="2025-09-12 17:23:56.011536559 +0000 UTC m=+52.432454407" watchObservedRunningTime="2025-09-12 17:23:59.949918125 +0000 UTC m=+56.370835981"
Sep 12 17:23:59.950984 kubelet[3398]: I0912 17:23:59.950742 3398 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7dcbdd784d-pcpqj" podStartSLOduration=27.416894025 podStartE2EDuration="37.950714523s" podCreationTimestamp="2025-09-12 17:23:22 +0000 UTC" firstStartedPulling="2025-09-12 17:23:48.991072775 +0000 UTC m=+45.411990623" lastFinishedPulling="2025-09-12 17:23:59.524893273 +0000 UTC m=+55.945811121" observedRunningTime="2025-09-12 17:23:59.948714624 +0000 UTC m=+56.369632512" watchObservedRunningTime="2025-09-12 17:23:59.950714523 +0000 UTC m=+56.371632371"
Sep 12 17:23:59.964566 containerd[1875]: time="2025-09-12T17:23:59.964400229Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f65827dbdfd6c84dd8f50f42a12afd80f589706e9aaf598113fe0537345e8fad\" id:\"3309f912d0aabf2fd413db711dd1f45771914551f4595077f6067227e60d2ddb\" pid:5694 exited_at:{seconds:1757697839 nanos:963543797}"
Sep 12 17:24:00.995829 containerd[1875]: time="2025-09-12T17:24:00.995782994Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:24:00.998873 containerd[1875]: time="2025-09-12T17:24:00.998842129Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8227489"
Sep 12 17:24:01.001906 containerd[1875]: time="2025-09-12T17:24:01.001864512Z" level=info msg="ImageCreate event name:\"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:24:01.006495 containerd[1875]: time="2025-09-12T17:24:01.006306743Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:24:01.006713 containerd[1875]: time="2025-09-12T17:24:01.006690828Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"9596730\" in 1.48152176s"
Sep 12 17:24:01.006791 containerd[1875]: time="2025-09-12T17:24:01.006777767Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\""
Sep 12 17:24:01.007852 containerd[1875]: time="2025-09-12T17:24:01.007806881Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\""
Sep 12 17:24:01.010300 containerd[1875]: time="2025-09-12T17:24:01.010271117Z" level=info msg="CreateContainer within sandbox \"602b8ba76a76d89d71afb51f8ddaf30605030e8f0a73a6b371e1f608c0b668fd\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Sep 12 17:24:01.031595 containerd[1875]: time="2025-09-12T17:24:01.031563151Z" level=info msg="Container d56007aa92880446a602561d18990ceeb75f37089520c26d533c3a45b40cfe26: CDI devices from CRI Config.CDIDevices: []"
Sep 12 17:24:01.049194 containerd[1875]: time="2025-09-12T17:24:01.049169028Z" level=info msg="CreateContainer within sandbox \"602b8ba76a76d89d71afb51f8ddaf30605030e8f0a73a6b371e1f608c0b668fd\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"d56007aa92880446a602561d18990ceeb75f37089520c26d533c3a45b40cfe26\""
Sep 12 17:24:01.050485 containerd[1875]: time="2025-09-12T17:24:01.050355652Z" level=info msg="StartContainer for \"d56007aa92880446a602561d18990ceeb75f37089520c26d533c3a45b40cfe26\""
Sep 12 17:24:01.051731 containerd[1875]: time="2025-09-12T17:24:01.051407368Z" level=info msg="connecting to shim d56007aa92880446a602561d18990ceeb75f37089520c26d533c3a45b40cfe26" address="unix:///run/containerd/s/41602551f11551526470aa221b1e0c0b6d799b4b48161dcf366acce85b39a21f" protocol=ttrpc version=3
Sep 12 17:24:01.068616 systemd[1]: Started cri-containerd-d56007aa92880446a602561d18990ceeb75f37089520c26d533c3a45b40cfe26.scope - libcontainer container d56007aa92880446a602561d18990ceeb75f37089520c26d533c3a45b40cfe26.
Sep 12 17:24:01.244050 containerd[1875]: time="2025-09-12T17:24:01.243990595Z" level=info msg="StartContainer for \"d56007aa92880446a602561d18990ceeb75f37089520c26d533c3a45b40cfe26\" returns successfully"
Sep 12 17:24:01.440158 containerd[1875]: time="2025-09-12T17:24:01.439757378Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:24:01.442453 containerd[1875]: time="2025-09-12T17:24:01.442427869Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77"
Sep 12 17:24:01.443524 containerd[1875]: time="2025-09-12T17:24:01.443494641Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 435.648663ms"
Sep 12 17:24:01.443524 containerd[1875]: time="2025-09-12T17:24:01.443522890Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\""
Sep 12 17:24:01.444708 containerd[1875]: time="2025-09-12T17:24:01.444346150Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\""
Sep 12 17:24:01.446659 containerd[1875]: time="2025-09-12T17:24:01.446633820Z" level=info msg="CreateContainer within sandbox \"a20a299416109be36c2289696b7a2a7acdd8703e7f7162a9af8918c82b9e016d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Sep 12 17:24:01.466861 containerd[1875]: time="2025-09-12T17:24:01.466840929Z" level=info msg="Container 623ad906a9f9de1c82034d9262a6daf92efe3fceee10076b08823b3ac12c54f5: CDI devices from CRI Config.CDIDevices: []"
Sep 12 17:24:01.483864 containerd[1875]: time="2025-09-12T17:24:01.483837145Z" level=info msg="CreateContainer within sandbox \"a20a299416109be36c2289696b7a2a7acdd8703e7f7162a9af8918c82b9e016d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"623ad906a9f9de1c82034d9262a6daf92efe3fceee10076b08823b3ac12c54f5\""
Sep 12 17:24:01.484491 containerd[1875]: time="2025-09-12T17:24:01.484210358Z" level=info msg="StartContainer for \"623ad906a9f9de1c82034d9262a6daf92efe3fceee10076b08823b3ac12c54f5\""
Sep 12 17:24:01.484946 containerd[1875]: time="2025-09-12T17:24:01.484924630Z" level=info msg="connecting to shim 623ad906a9f9de1c82034d9262a6daf92efe3fceee10076b08823b3ac12c54f5" address="unix:///run/containerd/s/c52a2a22bbb356bbae451fc432294f1198c87fd28f350ac54fd3efbe8c6ffdee" protocol=ttrpc version=3
Sep 12 17:24:01.502585 systemd[1]: Started cri-containerd-623ad906a9f9de1c82034d9262a6daf92efe3fceee10076b08823b3ac12c54f5.scope - libcontainer container 623ad906a9f9de1c82034d9262a6daf92efe3fceee10076b08823b3ac12c54f5.
Sep 12 17:24:01.548532 containerd[1875]: time="2025-09-12T17:24:01.548452768Z" level=info msg="StartContainer for \"623ad906a9f9de1c82034d9262a6daf92efe3fceee10076b08823b3ac12c54f5\" returns successfully"
Sep 12 17:24:01.962135 kubelet[3398]: I0912 17:24:01.961958 3398 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7ff6f6f7b7-m8ldc" podStartSLOduration=32.589072305 podStartE2EDuration="42.961942711s" podCreationTimestamp="2025-09-12 17:23:19 +0000 UTC" firstStartedPulling="2025-09-12 17:23:51.071313826 +0000 UTC m=+47.492231666" lastFinishedPulling="2025-09-12 17:24:01.444184232 +0000 UTC m=+57.865102072" observedRunningTime="2025-09-12 17:24:01.961089282 +0000 UTC m=+58.382007122" watchObservedRunningTime="2025-09-12 17:24:01.961942711 +0000 UTC m=+58.382860583"
Sep 12 17:24:04.112399 containerd[1875]: time="2025-09-12T17:24:04.112346222Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:24:04.115017 containerd[1875]: time="2025-09-12T17:24:04.114989712Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=13761208"
Sep 12 17:24:04.118419 containerd[1875]: time="2025-09-12T17:24:04.118389899Z" level=info msg="ImageCreate event name:\"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:24:04.125132 containerd[1875]: time="2025-09-12T17:24:04.124515955Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:24:04.125132 containerd[1875]: time="2025-09-12T17:24:04.125028636Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"15130401\" in 2.680657653s"
Sep 12 17:24:04.125132 containerd[1875]: time="2025-09-12T17:24:04.125053109Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\""
Sep 12 17:24:04.133141 containerd[1875]: time="2025-09-12T17:24:04.133116406Z" level=info msg="CreateContainer within sandbox \"602b8ba76a76d89d71afb51f8ddaf30605030e8f0a73a6b371e1f608c0b668fd\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Sep 12 17:24:04.148485 containerd[1875]: time="2025-09-12T17:24:04.147593249Z" level=info msg="Container 30e2486462d5bb1179fb36b96ac4498086e76dd6d9cbac5c5792f160979b9b99: CDI devices from CRI Config.CDIDevices: []"
Sep 12 17:24:04.150914 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount291442698.mount: Deactivated successfully.
Sep 12 17:24:04.173298 containerd[1875]: time="2025-09-12T17:24:04.173259296Z" level=info msg="CreateContainer within sandbox \"602b8ba76a76d89d71afb51f8ddaf30605030e8f0a73a6b371e1f608c0b668fd\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"30e2486462d5bb1179fb36b96ac4498086e76dd6d9cbac5c5792f160979b9b99\""
Sep 12 17:24:04.174089 containerd[1875]: time="2025-09-12T17:24:04.174063203Z" level=info msg="StartContainer for \"30e2486462d5bb1179fb36b96ac4498086e76dd6d9cbac5c5792f160979b9b99\""
Sep 12 17:24:04.175424 containerd[1875]: time="2025-09-12T17:24:04.175265396Z" level=info msg="connecting to shim 30e2486462d5bb1179fb36b96ac4498086e76dd6d9cbac5c5792f160979b9b99" address="unix:///run/containerd/s/41602551f11551526470aa221b1e0c0b6d799b4b48161dcf366acce85b39a21f" protocol=ttrpc version=3
Sep 12 17:24:04.197591 systemd[1]: Started cri-containerd-30e2486462d5bb1179fb36b96ac4498086e76dd6d9cbac5c5792f160979b9b99.scope - libcontainer container 30e2486462d5bb1179fb36b96ac4498086e76dd6d9cbac5c5792f160979b9b99.
Sep 12 17:24:04.239678 containerd[1875]: time="2025-09-12T17:24:04.239611210Z" level=info msg="StartContainer for \"30e2486462d5bb1179fb36b96ac4498086e76dd6d9cbac5c5792f160979b9b99\" returns successfully"
Sep 12 17:24:04.821927 kubelet[3398]: I0912 17:24:04.821885 3398 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Sep 12 17:24:04.821927 kubelet[3398]: I0912 17:24:04.821933 3398 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Sep 12 17:24:04.975032 kubelet[3398]: I0912 17:24:04.974972 3398 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-vj2b6" podStartSLOduration=28.988178022 podStartE2EDuration="42.974956212s" podCreationTimestamp="2025-09-12 17:23:22 +0000 UTC" firstStartedPulling="2025-09-12 17:23:50.140051715 +0000 UTC m=+46.560969563" lastFinishedPulling="2025-09-12 17:24:04.126829913 +0000 UTC m=+60.547747753" observedRunningTime="2025-09-12 17:24:04.973245314 +0000 UTC m=+61.394163154" watchObservedRunningTime="2025-09-12 17:24:04.974956212 +0000 UTC m=+61.395874052"
Sep 12 17:24:11.422908 kubelet[3398]: I0912 17:24:11.422763 3398 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 12 17:24:12.919305 containerd[1875]: time="2025-09-12T17:24:12.919259945Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f02e13a6b5c334c95bd544250fb66979f96d32bff376192c35e9982ae47c57f8\" id:\"1b67c4202bbb51eb094118684d4b0d91603da2b2299de5d013130175ac1bbd40\" pid:5841 exited_at:{seconds:1757697852 nanos:918859789}"
Sep 12 17:24:23.981892 containerd[1875]: time="2025-09-12T17:24:23.981850523Z" level=info msg="TaskExit event in podsandbox handler container_id:\"703c6f9e362392092294f983a9984718b68dac0979eef70de8148d49db9f0f0c\" id:\"c4d69bb9d932b0208bae93a06bc5ad389d584ab3bf9428688da5605aa5d05150\" pid:5866 exited_at:{seconds:1757697863 nanos:981552144}"
Sep 12 17:24:29.964199 containerd[1875]: time="2025-09-12T17:24:29.964148619Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f65827dbdfd6c84dd8f50f42a12afd80f589706e9aaf598113fe0537345e8fad\" id:\"335fda2834df9bed06b803810dc755d9b3899585ddfe538cb29a649ada0a5aeb\" pid:5896 exited_at:{seconds:1757697869 nanos:963923243}"
Sep 12 17:24:42.918164 containerd[1875]: time="2025-09-12T17:24:42.918081031Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f02e13a6b5c334c95bd544250fb66979f96d32bff376192c35e9982ae47c57f8\" id:\"22c322e523453279fbcdb70a2e524bdb572bf78a8d644eba47b16bc40ce2426a\" pid:5921 exited_at:{seconds:1757697882 nanos:917734371}"
Sep 12 17:24:43.153873 containerd[1875]: time="2025-09-12T17:24:43.153820052Z" level=info msg="TaskExit event in podsandbox handler container_id:\"703c6f9e362392092294f983a9984718b68dac0979eef70de8148d49db9f0f0c\" id:\"91ae5bfdf8870bb16bed3228369c0d7eef34cfc2d72344fd6514513f2ba5eba3\" pid:5945 exited_at:{seconds:1757697883 nanos:153500089}"
Sep 12 17:24:52.442062 containerd[1875]: time="2025-09-12T17:24:52.441949651Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f65827dbdfd6c84dd8f50f42a12afd80f589706e9aaf598113fe0537345e8fad\" id:\"217602a65060dee3619f06915daa5ef89a050ec17931da5846f6a813167131c5\" pid:5971 exited_at:{seconds:1757697892 nanos:441662825}"
Sep 12 17:24:53.963730 containerd[1875]: time="2025-09-12T17:24:53.963623418Z" level=info msg="TaskExit event in podsandbox handler container_id:\"703c6f9e362392092294f983a9984718b68dac0979eef70de8148d49db9f0f0c\" id:\"8974ae8054c42afdbcd069ad0b6c35f164f766d11a2269cd385002bfe3bdebf9\" pid:5993 exited_at:{seconds:1757697893 nanos:963355161}"
Sep 12 17:24:59.959196 containerd[1875]: time="2025-09-12T17:24:59.959161976Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f65827dbdfd6c84dd8f50f42a12afd80f589706e9aaf598113fe0537345e8fad\" id:\"e0553e3ce9016dd6d851b5e218d13dc8f900124317a324009c5c5d81fe66c73d\" pid:6019 exited_at:{seconds:1757697899 nanos:959003611}"
Sep 12 17:25:12.915715 containerd[1875]: time="2025-09-12T17:25:12.915670376Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f02e13a6b5c334c95bd544250fb66979f96d32bff376192c35e9982ae47c57f8\" id:\"e26e8dde5a6c7e8b684c1ad701f2d5bc178ba0b893c543aaba6ca3f8290a4900\" pid:6053 exited_at:{seconds:1757697912 nanos:915327817}"
Sep 12 17:25:20.935937 systemd[1]: Started sshd@7-10.200.20.11:22-10.200.16.10:59048.service - OpenSSH per-connection server daemon (10.200.16.10:59048).
Sep 12 17:25:21.397837 sshd[6085]: Accepted publickey for core from 10.200.16.10 port 59048 ssh2: RSA SHA256:+on2THTR/nRp7vYd/q00sis2kB6WPhgipQlvfvqeQ7E
Sep 12 17:25:21.400094 sshd-session[6085]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:25:21.404275 systemd-logind[1846]: New session 10 of user core.
Sep 12 17:25:21.407593 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 12 17:25:21.792457 sshd[6088]: Connection closed by 10.200.16.10 port 59048
Sep 12 17:25:21.792978 sshd-session[6085]: pam_unix(sshd:session): session closed for user core
Sep 12 17:25:21.796284 systemd[1]: sshd@7-10.200.20.11:22-10.200.16.10:59048.service: Deactivated successfully.
Sep 12 17:25:21.798033 systemd[1]: session-10.scope: Deactivated successfully.
Sep 12 17:25:21.799102 systemd-logind[1846]: Session 10 logged out. Waiting for processes to exit.
Sep 12 17:25:21.800535 systemd-logind[1846]: Removed session 10.
Sep 12 17:25:23.962691 containerd[1875]: time="2025-09-12T17:25:23.962646622Z" level=info msg="TaskExit event in podsandbox handler container_id:\"703c6f9e362392092294f983a9984718b68dac0979eef70de8148d49db9f0f0c\" id:\"9ab02f29de23b4bf980b31735aef500a5e318f45ec5b86443e5d93a57abcb44b\" pid:6120 exited_at:{seconds:1757697923 nanos:962313004}"
Sep 12 17:25:26.886694 systemd[1]: Started sshd@8-10.200.20.11:22-10.200.16.10:59056.service - OpenSSH per-connection server daemon (10.200.16.10:59056).
Sep 12 17:25:27.379041 sshd[6132]: Accepted publickey for core from 10.200.16.10 port 59056 ssh2: RSA SHA256:+on2THTR/nRp7vYd/q00sis2kB6WPhgipQlvfvqeQ7E
Sep 12 17:25:27.380163 sshd-session[6132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:25:27.384766 systemd-logind[1846]: New session 11 of user core.
Sep 12 17:25:27.391593 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 12 17:25:27.814064 sshd[6135]: Connection closed by 10.200.16.10 port 59056
Sep 12 17:25:27.815891 sshd-session[6132]: pam_unix(sshd:session): session closed for user core
Sep 12 17:25:27.820457 systemd[1]: sshd@8-10.200.20.11:22-10.200.16.10:59056.service: Deactivated successfully.
Sep 12 17:25:27.824106 systemd[1]: session-11.scope: Deactivated successfully.
Sep 12 17:25:27.825212 systemd-logind[1846]: Session 11 logged out. Waiting for processes to exit.
Sep 12 17:25:27.827148 systemd-logind[1846]: Removed session 11.
Sep 12 17:25:29.958137 containerd[1875]: time="2025-09-12T17:25:29.958095390Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f65827dbdfd6c84dd8f50f42a12afd80f589706e9aaf598113fe0537345e8fad\" id:\"8555fc3b46365400ae24831b13d814693419513e50363fbc54fe4a73d5ba1932\" pid:6160 exited_at:{seconds:1757697929 nanos:957734022}"
Sep 12 17:25:32.905815 systemd[1]: Started sshd@9-10.200.20.11:22-10.200.16.10:54706.service - OpenSSH per-connection server daemon (10.200.16.10:54706).
Sep 12 17:25:33.399399 sshd[6171]: Accepted publickey for core from 10.200.16.10 port 54706 ssh2: RSA SHA256:+on2THTR/nRp7vYd/q00sis2kB6WPhgipQlvfvqeQ7E
Sep 12 17:25:33.400559 sshd-session[6171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:25:33.404281 systemd-logind[1846]: New session 12 of user core.
Sep 12 17:25:33.412755 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 12 17:25:33.806208 sshd[6175]: Connection closed by 10.200.16.10 port 54706
Sep 12 17:25:33.806746 sshd-session[6171]: pam_unix(sshd:session): session closed for user core
Sep 12 17:25:33.810291 systemd[1]: sshd@9-10.200.20.11:22-10.200.16.10:54706.service: Deactivated successfully.
Sep 12 17:25:33.812201 systemd[1]: session-12.scope: Deactivated successfully.
Sep 12 17:25:33.813285 systemd-logind[1846]: Session 12 logged out. Waiting for processes to exit.
Sep 12 17:25:33.814322 systemd-logind[1846]: Removed session 12.
Sep 12 17:25:33.895389 systemd[1]: Started sshd@10-10.200.20.11:22-10.200.16.10:54718.service - OpenSSH per-connection server daemon (10.200.16.10:54718).
Sep 12 17:25:34.386448 sshd[6187]: Accepted publickey for core from 10.200.16.10 port 54718 ssh2: RSA SHA256:+on2THTR/nRp7vYd/q00sis2kB6WPhgipQlvfvqeQ7E
Sep 12 17:25:34.387526 sshd-session[6187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:25:34.392160 systemd-logind[1846]: New session 13 of user core.
Sep 12 17:25:34.399634 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 12 17:25:34.815733 sshd[6190]: Connection closed by 10.200.16.10 port 54718
Sep 12 17:25:34.816265 sshd-session[6187]: pam_unix(sshd:session): session closed for user core
Sep 12 17:25:34.819788 systemd[1]: sshd@10-10.200.20.11:22-10.200.16.10:54718.service: Deactivated successfully.
Sep 12 17:25:34.822109 systemd[1]: session-13.scope: Deactivated successfully.
Sep 12 17:25:34.823458 systemd-logind[1846]: Session 13 logged out. Waiting for processes to exit.
Sep 12 17:25:34.824835 systemd-logind[1846]: Removed session 13.
Sep 12 17:25:34.905492 systemd[1]: Started sshd@11-10.200.20.11:22-10.200.16.10:54728.service - OpenSSH per-connection server daemon (10.200.16.10:54728).
Sep 12 17:25:35.405929 sshd[6200]: Accepted publickey for core from 10.200.16.10 port 54728 ssh2: RSA SHA256:+on2THTR/nRp7vYd/q00sis2kB6WPhgipQlvfvqeQ7E
Sep 12 17:25:35.407381 sshd-session[6200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:25:35.411287 systemd-logind[1846]: New session 14 of user core.
Sep 12 17:25:35.418575 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 12 17:25:35.801400 sshd[6203]: Connection closed by 10.200.16.10 port 54728
Sep 12 17:25:35.802040 sshd-session[6200]: pam_unix(sshd:session): session closed for user core
Sep 12 17:25:35.805231 systemd[1]: sshd@11-10.200.20.11:22-10.200.16.10:54728.service: Deactivated successfully.
Sep 12 17:25:35.806893 systemd[1]: session-14.scope: Deactivated successfully.
Sep 12 17:25:35.807744 systemd-logind[1846]: Session 14 logged out. Waiting for processes to exit.
Sep 12 17:25:35.809174 systemd-logind[1846]: Removed session 14.
Sep 12 17:25:40.890703 systemd[1]: Started sshd@12-10.200.20.11:22-10.200.16.10:32842.service - OpenSSH per-connection server daemon (10.200.16.10:32842).
Sep 12 17:25:41.382663 sshd[6217]: Accepted publickey for core from 10.200.16.10 port 32842 ssh2: RSA SHA256:+on2THTR/nRp7vYd/q00sis2kB6WPhgipQlvfvqeQ7E
Sep 12 17:25:41.383378 sshd-session[6217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:25:41.387082 systemd-logind[1846]: New session 15 of user core.
Sep 12 17:25:41.391577 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 12 17:25:41.792156 sshd[6224]: Connection closed by 10.200.16.10 port 32842
Sep 12 17:25:41.792066 sshd-session[6217]: pam_unix(sshd:session): session closed for user core
Sep 12 17:25:41.795407 systemd[1]: sshd@12-10.200.20.11:22-10.200.16.10:32842.service: Deactivated successfully.
Sep 12 17:25:41.799414 systemd[1]: session-15.scope: Deactivated successfully.
Sep 12 17:25:41.800313 systemd-logind[1846]: Session 15 logged out. Waiting for processes to exit.
Sep 12 17:25:41.801455 systemd-logind[1846]: Removed session 15.
Sep 12 17:25:42.912964 containerd[1875]: time="2025-09-12T17:25:42.912921699Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f02e13a6b5c334c95bd544250fb66979f96d32bff376192c35e9982ae47c57f8\" id:\"4ff9ef7a4a80645a72c0a1dd9bd177612f7b193b9f941e14e135f408e179b594\" pid:6247 exit_status:1 exited_at:{seconds:1757697942 nanos:912511062}"
Sep 12 17:25:43.150411 containerd[1875]: time="2025-09-12T17:25:43.150356507Z" level=info msg="TaskExit event in podsandbox handler container_id:\"703c6f9e362392092294f983a9984718b68dac0979eef70de8148d49db9f0f0c\" id:\"dfe091fe53b7ad0166e503e297346031018c460bc96b5e36c9c3d8bf0a7b2a96\" pid:6269 exited_at:{seconds:1757697943 nanos:150119195}"
Sep 12 17:25:46.881670 systemd[1]: Started sshd@13-10.200.20.11:22-10.200.16.10:32856.service - OpenSSH per-connection server daemon (10.200.16.10:32856).
Sep 12 17:25:47.377086 sshd[6279]: Accepted publickey for core from 10.200.16.10 port 32856 ssh2: RSA SHA256:+on2THTR/nRp7vYd/q00sis2kB6WPhgipQlvfvqeQ7E
Sep 12 17:25:47.379103 sshd-session[6279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:25:47.383284 systemd-logind[1846]: New session 16 of user core.
Sep 12 17:25:47.387595 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 12 17:25:47.771948 sshd[6282]: Connection closed by 10.200.16.10 port 32856
Sep 12 17:25:47.772540 sshd-session[6279]: pam_unix(sshd:session): session closed for user core
Sep 12 17:25:47.776058 systemd[1]: sshd@13-10.200.20.11:22-10.200.16.10:32856.service: Deactivated successfully.
Sep 12 17:25:47.778012 systemd[1]: session-16.scope: Deactivated successfully.
Sep 12 17:25:47.779010 systemd-logind[1846]: Session 16 logged out. Waiting for processes to exit.
Sep 12 17:25:47.781119 systemd-logind[1846]: Removed session 16.
Sep 12 17:25:52.437286 containerd[1875]: time="2025-09-12T17:25:52.436972864Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f65827dbdfd6c84dd8f50f42a12afd80f589706e9aaf598113fe0537345e8fad\" id:\"a59a0a3227b3f41c312dc871df30ec029e41e530692732a9da8cf889bcaff659\" pid:6304 exited_at:{seconds:1757697952 nanos:436203784}"
Sep 12 17:25:52.854361 systemd[1]: Started sshd@14-10.200.20.11:22-10.200.16.10:59436.service - OpenSSH per-connection server daemon (10.200.16.10:59436).
Sep 12 17:25:53.304373 sshd[6315]: Accepted publickey for core from 10.200.16.10 port 59436 ssh2: RSA SHA256:+on2THTR/nRp7vYd/q00sis2kB6WPhgipQlvfvqeQ7E
Sep 12 17:25:53.305447 sshd-session[6315]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:25:53.308857 systemd-logind[1846]: New session 17 of user core.
Sep 12 17:25:53.317754 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 12 17:25:53.693492 sshd[6318]: Connection closed by 10.200.16.10 port 59436
Sep 12 17:25:53.693306 sshd-session[6315]: pam_unix(sshd:session): session closed for user core
Sep 12 17:25:53.696065 systemd[1]: sshd@14-10.200.20.11:22-10.200.16.10:59436.service: Deactivated successfully.
Sep 12 17:25:53.698218 systemd[1]: session-17.scope: Deactivated successfully.
Sep 12 17:25:53.700728 systemd-logind[1846]: Session 17 logged out. Waiting for processes to exit.
Sep 12 17:25:53.702776 systemd-logind[1846]: Removed session 17.
Sep 12 17:25:53.966069 containerd[1875]: time="2025-09-12T17:25:53.965949583Z" level=info msg="TaskExit event in podsandbox handler container_id:\"703c6f9e362392092294f983a9984718b68dac0979eef70de8148d49db9f0f0c\" id:\"e77654c52a3584f3ec8e593a07edc99fd537b3fd6beb8fe979b8313d7547bfef\" pid:6339 exited_at:{seconds:1757697953 nanos:965375278}"
Sep 12 17:25:58.775057 systemd[1]: Started sshd@15-10.200.20.11:22-10.200.16.10:59452.service - OpenSSH per-connection server daemon (10.200.16.10:59452).
Sep 12 17:25:59.222315 sshd[6349]: Accepted publickey for core from 10.200.16.10 port 59452 ssh2: RSA SHA256:+on2THTR/nRp7vYd/q00sis2kB6WPhgipQlvfvqeQ7E
Sep 12 17:25:59.223809 sshd-session[6349]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:25:59.228072 systemd-logind[1846]: New session 18 of user core.
Sep 12 17:25:59.232592 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 12 17:25:59.613641 sshd[6352]: Connection closed by 10.200.16.10 port 59452
Sep 12 17:25:59.614679 sshd-session[6349]: pam_unix(sshd:session): session closed for user core
Sep 12 17:25:59.618701 systemd[1]: sshd@15-10.200.20.11:22-10.200.16.10:59452.service: Deactivated successfully.
Sep 12 17:25:59.623093 systemd[1]: session-18.scope: Deactivated successfully.
Sep 12 17:25:59.625687 systemd-logind[1846]: Session 18 logged out. Waiting for processes to exit.
Sep 12 17:25:59.628629 systemd-logind[1846]: Removed session 18.
Sep 12 17:25:59.699658 systemd[1]: Started sshd@16-10.200.20.11:22-10.200.16.10:59460.service - OpenSSH per-connection server daemon (10.200.16.10:59460).
Sep 12 17:25:59.977436 containerd[1875]: time="2025-09-12T17:25:59.977063201Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f65827dbdfd6c84dd8f50f42a12afd80f589706e9aaf598113fe0537345e8fad\" id:\"10c6ef9b4cf94759d76ed3548597ffb5cc1563d47c2b8fd3351841193367f67f\" pid:6379 exited_at:{seconds:1757697959 nanos:975685688}"
Sep 12 17:26:00.153241 sshd[6364]: Accepted publickey for core from 10.200.16.10 port 59460 ssh2: RSA SHA256:+on2THTR/nRp7vYd/q00sis2kB6WPhgipQlvfvqeQ7E
Sep 12 17:26:00.154357 sshd-session[6364]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:26:00.158365 systemd-logind[1846]: New session 19 of user core.
Sep 12 17:26:00.164595 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 12 17:26:00.657509 sshd[6388]: Connection closed by 10.200.16.10 port 59460
Sep 12 17:26:00.658015 sshd-session[6364]: pam_unix(sshd:session): session closed for user core
Sep 12 17:26:00.661269 systemd-logind[1846]: Session 19 logged out. Waiting for processes to exit.
Sep 12 17:26:00.661550 systemd[1]: sshd@16-10.200.20.11:22-10.200.16.10:59460.service: Deactivated successfully.
Sep 12 17:26:00.663926 systemd[1]: session-19.scope: Deactivated successfully.
Sep 12 17:26:00.666082 systemd-logind[1846]: Removed session 19.
Sep 12 17:26:00.749874 systemd[1]: Started sshd@17-10.200.20.11:22-10.200.16.10:38548.service - OpenSSH per-connection server daemon (10.200.16.10:38548).
Sep 12 17:26:01.242492 sshd[6397]: Accepted publickey for core from 10.200.16.10 port 38548 ssh2: RSA SHA256:+on2THTR/nRp7vYd/q00sis2kB6WPhgipQlvfvqeQ7E
Sep 12 17:26:01.243558 sshd-session[6397]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:26:01.246993 systemd-logind[1846]: New session 20 of user core.
Sep 12 17:26:01.254575 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 12 17:26:02.098507 sshd[6400]: Connection closed by 10.200.16.10 port 38548
Sep 12 17:26:02.099057 sshd-session[6397]: pam_unix(sshd:session): session closed for user core
Sep 12 17:26:02.102624 systemd[1]: sshd@17-10.200.20.11:22-10.200.16.10:38548.service: Deactivated successfully.
Sep 12 17:26:02.104153 systemd[1]: session-20.scope: Deactivated successfully.
Sep 12 17:26:02.104935 systemd-logind[1846]: Session 20 logged out. Waiting for processes to exit.
Sep 12 17:26:02.106055 systemd-logind[1846]: Removed session 20.
Sep 12 17:26:02.180676 systemd[1]: Started sshd@18-10.200.20.11:22-10.200.16.10:38552.service - OpenSSH per-connection server daemon (10.200.16.10:38552).
Sep 12 17:26:02.636728 sshd[6417]: Accepted publickey for core from 10.200.16.10 port 38552 ssh2: RSA SHA256:+on2THTR/nRp7vYd/q00sis2kB6WPhgipQlvfvqeQ7E
Sep 12 17:26:02.638298 sshd-session[6417]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:26:02.642133 systemd-logind[1846]: New session 21 of user core.
Sep 12 17:26:02.646599 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 12 17:26:03.102284 sshd[6420]: Connection closed by 10.200.16.10 port 38552
Sep 12 17:26:03.102901 sshd-session[6417]: pam_unix(sshd:session): session closed for user core
Sep 12 17:26:03.106062 systemd[1]: sshd@18-10.200.20.11:22-10.200.16.10:38552.service: Deactivated successfully.
Sep 12 17:26:03.109337 systemd[1]: session-21.scope: Deactivated successfully.
Sep 12 17:26:03.110605 systemd-logind[1846]: Session 21 logged out. Waiting for processes to exit.
Sep 12 17:26:03.111999 systemd-logind[1846]: Removed session 21.
Sep 12 17:26:03.182781 systemd[1]: Started sshd@19-10.200.20.11:22-10.200.16.10:38556.service - OpenSSH per-connection server daemon (10.200.16.10:38556).
Sep 12 17:26:03.633403 sshd[6430]: Accepted publickey for core from 10.200.16.10 port 38556 ssh2: RSA SHA256:+on2THTR/nRp7vYd/q00sis2kB6WPhgipQlvfvqeQ7E
Sep 12 17:26:03.634519 sshd-session[6430]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:26:03.638174 systemd-logind[1846]: New session 22 of user core.
Sep 12 17:26:03.644589 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 12 17:26:04.031139 sshd[6433]: Connection closed by 10.200.16.10 port 38556
Sep 12 17:26:04.031781 sshd-session[6430]: pam_unix(sshd:session): session closed for user core
Sep 12 17:26:04.034918 systemd[1]: sshd@19-10.200.20.11:22-10.200.16.10:38556.service: Deactivated successfully.
Sep 12 17:26:04.036558 systemd[1]: session-22.scope: Deactivated successfully.
Sep 12 17:26:04.037226 systemd-logind[1846]: Session 22 logged out. Waiting for processes to exit.
Sep 12 17:26:04.039270 systemd-logind[1846]: Removed session 22.
Sep 12 17:26:09.113487 systemd[1]: Started sshd@20-10.200.20.11:22-10.200.16.10:38560.service - OpenSSH per-connection server daemon (10.200.16.10:38560).
Sep 12 17:26:09.567341 sshd[6449]: Accepted publickey for core from 10.200.16.10 port 38560 ssh2: RSA SHA256:+on2THTR/nRp7vYd/q00sis2kB6WPhgipQlvfvqeQ7E
Sep 12 17:26:09.568083 sshd-session[6449]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:26:09.572514 systemd-logind[1846]: New session 23 of user core.
Sep 12 17:26:09.580605 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 12 17:26:09.949887 sshd[6452]: Connection closed by 10.200.16.10 port 38560
Sep 12 17:26:09.950541 sshd-session[6449]: pam_unix(sshd:session): session closed for user core
Sep 12 17:26:09.953708 systemd[1]: sshd@20-10.200.20.11:22-10.200.16.10:38560.service: Deactivated successfully.
Sep 12 17:26:09.955367 systemd[1]: session-23.scope: Deactivated successfully.
Sep 12 17:26:09.956022 systemd-logind[1846]: Session 23 logged out. Waiting for processes to exit.
Sep 12 17:26:09.957568 systemd-logind[1846]: Removed session 23.
Sep 12 17:26:12.915794 containerd[1875]: time="2025-09-12T17:26:12.915751113Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f02e13a6b5c334c95bd544250fb66979f96d32bff376192c35e9982ae47c57f8\" id:\"3bfd49db4f43357062cd534123c3e2ebedf68dd6ed6a0c82492d91e1a636c8de\" pid:6478 exited_at:{seconds:1757697972 nanos:915259435}"
Sep 12 17:26:15.040383 systemd[1]: Started sshd@21-10.200.20.11:22-10.200.16.10:48326.service - OpenSSH per-connection server daemon (10.200.16.10:48326).
Sep 12 17:26:15.536016 sshd[6491]: Accepted publickey for core from 10.200.16.10 port 48326 ssh2: RSA SHA256:+on2THTR/nRp7vYd/q00sis2kB6WPhgipQlvfvqeQ7E
Sep 12 17:26:15.537213 sshd-session[6491]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:26:15.540802 systemd-logind[1846]: New session 24 of user core.
Sep 12 17:26:15.550140 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 12 17:26:15.961293 sshd[6494]: Connection closed by 10.200.16.10 port 48326
Sep 12 17:26:15.961826 sshd-session[6491]: pam_unix(sshd:session): session closed for user core
Sep 12 17:26:15.966042 systemd-logind[1846]: Session 24 logged out. Waiting for processes to exit.
Sep 12 17:26:15.966495 systemd[1]: sshd@21-10.200.20.11:22-10.200.16.10:48326.service: Deactivated successfully.
Sep 12 17:26:15.969841 systemd[1]: session-24.scope: Deactivated successfully.
Sep 12 17:26:15.971807 systemd-logind[1846]: Removed session 24.
Sep 12 17:26:21.054375 systemd[1]: Started sshd@22-10.200.20.11:22-10.200.16.10:39976.service - OpenSSH per-connection server daemon (10.200.16.10:39976).
Sep 12 17:26:21.554765 sshd[6506]: Accepted publickey for core from 10.200.16.10 port 39976 ssh2: RSA SHA256:+on2THTR/nRp7vYd/q00sis2kB6WPhgipQlvfvqeQ7E
Sep 12 17:26:21.556172 sshd-session[6506]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:26:21.560497 systemd-logind[1846]: New session 25 of user core.
Sep 12 17:26:21.575723 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 12 17:26:21.946423 sshd[6509]: Connection closed by 10.200.16.10 port 39976
Sep 12 17:26:21.947005 sshd-session[6506]: pam_unix(sshd:session): session closed for user core
Sep 12 17:26:21.950542 systemd[1]: sshd@22-10.200.20.11:22-10.200.16.10:39976.service: Deactivated successfully.
Sep 12 17:26:21.952681 systemd[1]: session-25.scope: Deactivated successfully.
Sep 12 17:26:21.955654 systemd-logind[1846]: Session 25 logged out. Waiting for processes to exit.
Sep 12 17:26:21.956656 systemd-logind[1846]: Removed session 25.
Sep 12 17:26:23.966025 containerd[1875]: time="2025-09-12T17:26:23.965977108Z" level=info msg="TaskExit event in podsandbox handler container_id:\"703c6f9e362392092294f983a9984718b68dac0979eef70de8148d49db9f0f0c\" id:\"21e24c42192c9891c187424f476b7573e85def020507a47b031dda37136eb5a8\" pid:6533 exited_at:{seconds:1757697983 nanos:965686187}"
Sep 12 17:26:27.034440 systemd[1]: Started sshd@23-10.200.20.11:22-10.200.16.10:39980.service - OpenSSH per-connection server daemon (10.200.16.10:39980).
Sep 12 17:26:27.528672 sshd[6549]: Accepted publickey for core from 10.200.16.10 port 39980 ssh2: RSA SHA256:+on2THTR/nRp7vYd/q00sis2kB6WPhgipQlvfvqeQ7E
Sep 12 17:26:27.529671 sshd-session[6549]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:26:27.534719 systemd-logind[1846]: New session 26 of user core.
Sep 12 17:26:27.539639 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 12 17:26:27.938909 sshd[6552]: Connection closed by 10.200.16.10 port 39980
Sep 12 17:26:27.940685 sshd-session[6549]: pam_unix(sshd:session): session closed for user core
Sep 12 17:26:27.944696 systemd[1]: sshd@23-10.200.20.11:22-10.200.16.10:39980.service: Deactivated successfully.
Sep 12 17:26:27.946595 systemd[1]: session-26.scope: Deactivated successfully.
Sep 12 17:26:27.949893 systemd-logind[1846]: Session 26 logged out. Waiting for processes to exit.
Sep 12 17:26:27.951540 systemd-logind[1846]: Removed session 26.
Sep 12 17:26:29.962156 containerd[1875]: time="2025-09-12T17:26:29.962107173Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f65827dbdfd6c84dd8f50f42a12afd80f589706e9aaf598113fe0537345e8fad\" id:\"b095961ecb7c92029cf796b4c727098699d23a1c26dfb506d86e7f2e0d0083d0\" pid:6574 exited_at:{seconds:1757697989 nanos:961625479}"
Sep 12 17:26:33.028992 systemd[1]: Started sshd@24-10.200.20.11:22-10.200.16.10:38244.service - OpenSSH per-connection server daemon (10.200.16.10:38244).
Sep 12 17:26:33.520637 sshd[6586]: Accepted publickey for core from 10.200.16.10 port 38244 ssh2: RSA SHA256:+on2THTR/nRp7vYd/q00sis2kB6WPhgipQlvfvqeQ7E
Sep 12 17:26:33.521711 sshd-session[6586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:26:33.525321 systemd-logind[1846]: New session 27 of user core.
Sep 12 17:26:33.530752 systemd[1]: Started session-27.scope - Session 27 of User core.
Sep 12 17:26:33.929505 sshd[6589]: Connection closed by 10.200.16.10 port 38244
Sep 12 17:26:33.929137 sshd-session[6586]: pam_unix(sshd:session): session closed for user core
Sep 12 17:26:33.932640 systemd-logind[1846]: Session 27 logged out. Waiting for processes to exit.
Sep 12 17:26:33.933275 systemd[1]: sshd@24-10.200.20.11:22-10.200.16.10:38244.service: Deactivated successfully.
Sep 12 17:26:33.936176 systemd[1]: session-27.scope: Deactivated successfully.
Sep 12 17:26:33.937132 systemd-logind[1846]: Removed session 27.