Oct 13 00:26:08.035288 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd490]
Oct 13 00:26:08.035305 kernel: Linux version 6.12.51-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Sun Oct 12 22:32:01 -00 2025
Oct 13 00:26:08.035311 kernel: KASLR enabled
Oct 13 00:26:08.035315 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Oct 13 00:26:08.035319 kernel: printk: legacy bootconsole [pl11] enabled
Oct 13 00:26:08.035324 kernel: efi: EFI v2.7 by EDK II
Oct 13 00:26:08.035329 kernel: efi: ACPI 2.0=0x3f979018 SMBIOS=0x3f8a0000 SMBIOS 3.0=0x3f880000 MEMATTR=0x3e89d018 RNG=0x3f979998 MEMRESERVE=0x3db7d598
Oct 13 00:26:08.035333 kernel: random: crng init done
Oct 13 00:26:08.035337 kernel: secureboot: Secure boot disabled
Oct 13 00:26:08.035340 kernel: ACPI: Early table checksum verification disabled
Oct 13 00:26:08.035345 kernel: ACPI: RSDP 0x000000003F979018 000024 (v02 VRTUAL)
Oct 13 00:26:08.035349 kernel: ACPI: XSDT 0x000000003F979F18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Oct 13 00:26:08.035352 kernel: ACPI: FACP 0x000000003F979C18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Oct 13 00:26:08.035357 kernel: ACPI: DSDT 0x000000003F95A018 01E046 (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Oct 13 00:26:08.035362 kernel: ACPI: DBG2 0x000000003F979B18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Oct 13 00:26:08.035367 kernel: ACPI: GTDT 0x000000003F979D98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Oct 13 00:26:08.035371 kernel: ACPI: OEM0 0x000000003F979098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Oct 13 00:26:08.035375 kernel: ACPI: SPCR 0x000000003F979A98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Oct 13 00:26:08.035380 kernel: ACPI: APIC 0x000000003F979818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Oct 13 00:26:08.035385 kernel: ACPI: SRAT 0x000000003F979198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Oct 13 00:26:08.035389 kernel: ACPI: PPTT 0x000000003F979418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Oct 13 00:26:08.035393 kernel: ACPI: BGRT 0x000000003F979E98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Oct 13 00:26:08.035398 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Oct 13 00:26:08.035402 kernel: ACPI: Use ACPI SPCR as default console: No
Oct 13 00:26:08.035406 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Oct 13 00:26:08.035410 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] hotplug
Oct 13 00:26:08.035414 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] hotplug
Oct 13 00:26:08.035419 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Oct 13 00:26:08.035423 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Oct 13 00:26:08.035427 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Oct 13 00:26:08.035432 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Oct 13 00:26:08.035436 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Oct 13 00:26:08.035440 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Oct 13 00:26:08.035445 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Oct 13 00:26:08.035449 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Oct 13 00:26:08.035453 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Oct 13 00:26:08.035457 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x1bfffffff] -> [mem 0x00000000-0x1bfffffff]
Oct 13 00:26:08.035461 kernel: NODE_DATA(0) allocated [mem 0x1bf7ffa00-0x1bf806fff]
Oct 13 00:26:08.035465 kernel: Zone ranges:
Oct 13 00:26:08.035470 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Oct 13 00:26:08.035476 kernel: DMA32 empty
Oct 13 00:26:08.035481 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Oct 13 00:26:08.035485 kernel: Device empty
Oct 13 00:26:08.035489 kernel: Movable zone start for each node
Oct 13 00:26:08.035494 kernel: Early memory node ranges
Oct 13 00:26:08.035498 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Oct 13 00:26:08.035503 kernel: node 0: [mem 0x0000000000824000-0x000000003f38ffff]
Oct 13 00:26:08.035508 kernel: node 0: [mem 0x000000003f390000-0x000000003f93ffff]
Oct 13 00:26:08.035512 kernel: node 0: [mem 0x000000003f940000-0x000000003f9effff]
Oct 13 00:26:08.035517 kernel: node 0: [mem 0x000000003f9f0000-0x000000003fdeffff]
Oct 13 00:26:08.035521 kernel: node 0: [mem 0x000000003fdf0000-0x000000003fffffff]
Oct 13 00:26:08.035525 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Oct 13 00:26:08.035530 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Oct 13 00:26:08.035534 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Oct 13 00:26:08.035539 kernel: cma: Reserved 16 MiB at 0x000000003ca00000 on node -1
Oct 13 00:26:08.035543 kernel: psci: probing for conduit method from ACPI.
Oct 13 00:26:08.035547 kernel: psci: PSCIv1.3 detected in firmware.
Oct 13 00:26:08.035552 kernel: psci: Using standard PSCI v0.2 function IDs
Oct 13 00:26:08.035557 kernel: psci: MIGRATE_INFO_TYPE not supported.
Oct 13 00:26:08.035561 kernel: psci: SMC Calling Convention v1.4
Oct 13 00:26:08.035565 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Oct 13 00:26:08.035570 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Oct 13 00:26:08.035574 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Oct 13 00:26:08.035578 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Oct 13 00:26:08.035583 kernel: pcpu-alloc: [0] 0 [0] 1
Oct 13 00:26:08.035587 kernel: Detected PIPT I-cache on CPU0
Oct 13 00:26:08.035592 kernel: CPU features: detected: Address authentication (architected QARMA5 algorithm)
Oct 13 00:26:08.035596 kernel: CPU features: detected: GIC system register CPU interface
Oct 13 00:26:08.035600 kernel: CPU features: detected: Spectre-v4
Oct 13 00:26:08.035605 kernel: CPU features: detected: Spectre-BHB
Oct 13 00:26:08.035610 kernel: CPU features: kernel page table isolation forced ON by KASLR
Oct 13 00:26:08.035614 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Oct 13 00:26:08.035618 kernel: CPU features: detected: ARM erratum 2067961 or 2054223
Oct 13 00:26:08.035623 kernel: CPU features: detected: SSBS not fully self-synchronizing
Oct 13 00:26:08.035627 kernel: alternatives: applying boot alternatives
Oct 13 00:26:08.035633 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=37fc523060a9b8894388e25ab0f082059dd744d472a2b8577211d4b3dd66a910
Oct 13 00:26:08.035637 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Oct 13 00:26:08.035642 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 13 00:26:08.035646 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 13 00:26:08.035651 kernel: Fallback order for Node 0: 0
Oct 13 00:26:08.035656 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048540
Oct 13 00:26:08.035660 kernel: Policy zone: Normal
Oct 13 00:26:08.035665 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 13 00:26:08.035669 kernel: software IO TLB: area num 2.
Oct 13 00:26:08.035673 kernel: software IO TLB: mapped [mem 0x00000000359a0000-0x00000000399a0000] (64MB)
Oct 13 00:26:08.035678 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Oct 13 00:26:08.035682 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 13 00:26:08.035687 kernel: rcu: RCU event tracing is enabled.
Oct 13 00:26:08.035691 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Oct 13 00:26:08.035696 kernel: Trampoline variant of Tasks RCU enabled.
Oct 13 00:26:08.035700 kernel: Tracing variant of Tasks RCU enabled.
Oct 13 00:26:08.035705 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 13 00:26:08.035710 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Oct 13 00:26:08.035714 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Oct 13 00:26:08.035719 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Oct 13 00:26:08.035723 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Oct 13 00:26:08.035728 kernel: GICv3: 960 SPIs implemented
Oct 13 00:26:08.035732 kernel: GICv3: 0 Extended SPIs implemented
Oct 13 00:26:08.035736 kernel: Root IRQ handler: gic_handle_irq
Oct 13 00:26:08.035740 kernel: GICv3: GICv3 features: 16 PPIs, RSS
Oct 13 00:26:08.035745 kernel: GICv3: GICD_CTRL.DS=0, SCR_EL3.FIQ=0
Oct 13 00:26:08.035749 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Oct 13 00:26:08.035754 kernel: ITS: No ITS available, not enabling LPIs
Oct 13 00:26:08.035759 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 13 00:26:08.035763 kernel: arch_timer: cp15 timer(s) running at 1000.00MHz (virt).
Oct 13 00:26:08.035768 kernel: clocksource: arch_sys_counter: mask: 0x1fffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct 13 00:26:08.035773 kernel: sched_clock: 61 bits at 1000MHz, resolution 1ns, wraps every 4398046511103ns
Oct 13 00:26:08.035777 kernel: Console: colour dummy device 80x25
Oct 13 00:26:08.035782 kernel: printk: legacy console [tty1] enabled
Oct 13 00:26:08.035786 kernel: ACPI: Core revision 20240827
Oct 13 00:26:08.035791 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 2000.00 BogoMIPS (lpj=1000000)
Oct 13 00:26:08.035796 kernel: pid_max: default: 32768 minimum: 301
Oct 13 00:26:08.035800 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Oct 13 00:26:08.035805 kernel: landlock: Up and running.
Oct 13 00:26:08.035810 kernel: SELinux: Initializing.
Oct 13 00:26:08.035814 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 13 00:26:08.035819 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 13 00:26:08.035824 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0xa0000e, misc 0x31e1
Oct 13 00:26:08.035831 kernel: Hyper-V: Host Build 10.0.26102.1083-1-0
Oct 13 00:26:08.035837 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Oct 13 00:26:08.035841 kernel: rcu: Hierarchical SRCU implementation.
Oct 13 00:26:08.035846 kernel: rcu: Max phase no-delay instances is 400.
Oct 13 00:26:08.035851 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Oct 13 00:26:08.035856 kernel: Remapping and enabling EFI services.
Oct 13 00:26:08.035860 kernel: smp: Bringing up secondary CPUs ...
Oct 13 00:26:08.035866 kernel: Detected PIPT I-cache on CPU1
Oct 13 00:26:08.035871 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Oct 13 00:26:08.035876 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd490]
Oct 13 00:26:08.035880 kernel: smp: Brought up 1 node, 2 CPUs
Oct 13 00:26:08.035885 kernel: SMP: Total of 2 processors activated.
Oct 13 00:26:08.035890 kernel: CPU: All CPU(s) started at EL1
Oct 13 00:26:08.035895 kernel: CPU features: detected: 32-bit EL0 Support
Oct 13 00:26:08.035900 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Oct 13 00:26:08.035905 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Oct 13 00:26:08.035910 kernel: CPU features: detected: Common not Private translations
Oct 13 00:26:08.035915 kernel: CPU features: detected: CRC32 instructions
Oct 13 00:26:08.035919 kernel: CPU features: detected: Generic authentication (architected QARMA5 algorithm)
Oct 13 00:26:08.035924 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Oct 13 00:26:08.035929 kernel: CPU features: detected: LSE atomic instructions
Oct 13 00:26:08.035935 kernel: CPU features: detected: Privileged Access Never
Oct 13 00:26:08.035939 kernel: CPU features: detected: Speculation barrier (SB)
Oct 13 00:26:08.035944 kernel: CPU features: detected: TLB range maintenance instructions
Oct 13 00:26:08.035949 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Oct 13 00:26:08.035954 kernel: CPU features: detected: Scalable Vector Extension
Oct 13 00:26:08.035958 kernel: alternatives: applying system-wide alternatives
Oct 13 00:26:08.035963 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1
Oct 13 00:26:08.035968 kernel: SVE: maximum available vector length 16 bytes per vector
Oct 13 00:26:08.035973 kernel: SVE: default vector length 16 bytes per vector
Oct 13 00:26:08.035979 kernel: Memory: 3953532K/4194160K available (11136K kernel code, 2450K rwdata, 9076K rodata, 38976K init, 1038K bss, 219440K reserved, 16384K cma-reserved)
Oct 13 00:26:08.035984 kernel: devtmpfs: initialized
Oct 13 00:26:08.035989 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 13 00:26:08.035993 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Oct 13 00:26:08.035998 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Oct 13 00:26:08.036003 kernel: 0 pages in range for non-PLT usage
Oct 13 00:26:08.036008 kernel: 508560 pages in range for PLT usage
Oct 13 00:26:08.036013 kernel: pinctrl core: initialized pinctrl subsystem
Oct 13 00:26:08.036018 kernel: SMBIOS 3.1.0 present.
Oct 13 00:26:08.036023 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 06/10/2025
Oct 13 00:26:08.036028 kernel: DMI: Memory slots populated: 2/2
Oct 13 00:26:08.036032 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 13 00:26:08.036037 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Oct 13 00:26:08.036042 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Oct 13 00:26:08.036047 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Oct 13 00:26:08.036052 kernel: audit: initializing netlink subsys (disabled)
Oct 13 00:26:08.036057 kernel: audit: type=2000 audit(0.059:1): state=initialized audit_enabled=0 res=1
Oct 13 00:26:08.036062 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 13 00:26:08.036067 kernel: cpuidle: using governor menu
Oct 13 00:26:08.036072 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Oct 13 00:26:08.036076 kernel: ASID allocator initialised with 32768 entries
Oct 13 00:26:08.036081 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 13 00:26:08.036086 kernel: Serial: AMBA PL011 UART driver
Oct 13 00:26:08.036091 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 13 00:26:08.036096 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Oct 13 00:26:08.036100 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Oct 13 00:26:08.038154 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Oct 13 00:26:08.038162 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 13 00:26:08.038168 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Oct 13 00:26:08.038173 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Oct 13 00:26:08.038178 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Oct 13 00:26:08.038183 kernel: ACPI: Added _OSI(Module Device)
Oct 13 00:26:08.038187 kernel: ACPI: Added _OSI(Processor Device)
Oct 13 00:26:08.038192 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 13 00:26:08.038197 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 13 00:26:08.038203 kernel: ACPI: Interpreter enabled
Oct 13 00:26:08.038208 kernel: ACPI: Using GIC for interrupt routing
Oct 13 00:26:08.038213 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Oct 13 00:26:08.038218 kernel: printk: legacy console [ttyAMA0] enabled
Oct 13 00:26:08.038223 kernel: printk: legacy bootconsole [pl11] disabled
Oct 13 00:26:08.038228 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Oct 13 00:26:08.038233 kernel: ACPI: CPU0 has been hot-added
Oct 13 00:26:08.038237 kernel: ACPI: CPU1 has been hot-added
Oct 13 00:26:08.038242 kernel: iommu: Default domain type: Translated
Oct 13 00:26:08.038247 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Oct 13 00:26:08.038253 kernel: efivars: Registered efivars operations
Oct 13 00:26:08.038257 kernel: vgaarb: loaded
Oct 13 00:26:08.038262 kernel: clocksource: Switched to clocksource arch_sys_counter
Oct 13 00:26:08.038267 kernel: VFS: Disk quotas dquot_6.6.0
Oct 13 00:26:08.038272 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 13 00:26:08.038277 kernel: pnp: PnP ACPI init
Oct 13 00:26:08.038282 kernel: pnp: PnP ACPI: found 0 devices
Oct 13 00:26:08.038287 kernel: NET: Registered PF_INET protocol family
Oct 13 00:26:08.038291 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 13 00:26:08.038298 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Oct 13 00:26:08.038302 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 13 00:26:08.038307 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 13 00:26:08.038312 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Oct 13 00:26:08.038317 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Oct 13 00:26:08.038322 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 13 00:26:08.038327 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 13 00:26:08.038332 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 13 00:26:08.038337 kernel: PCI: CLS 0 bytes, default 64
Oct 13 00:26:08.038342 kernel: kvm [1]: HYP mode not available
Oct 13 00:26:08.038347 kernel: Initialise system trusted keyrings
Oct 13 00:26:08.038352 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Oct 13 00:26:08.038356 kernel: Key type asymmetric registered
Oct 13 00:26:08.038361 kernel: Asymmetric key parser 'x509' registered
Oct 13 00:26:08.038366 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Oct 13 00:26:08.038371 kernel: io scheduler mq-deadline registered
Oct 13 00:26:08.038376 kernel: io scheduler kyber registered
Oct 13 00:26:08.038381 kernel: io scheduler bfq registered
Oct 13 00:26:08.038386 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 13 00:26:08.038391 kernel: thunder_xcv, ver 1.0
Oct 13 00:26:08.038396 kernel: thunder_bgx, ver 1.0
Oct 13 00:26:08.038400 kernel: nicpf, ver 1.0
Oct 13 00:26:08.038405 kernel: nicvf, ver 1.0
Oct 13 00:26:08.038519 kernel: rtc-efi rtc-efi.0: registered as rtc0
Oct 13 00:26:08.038571 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-10-13T00:26:07 UTC (1760315167)
Oct 13 00:26:08.038579 kernel: efifb: probing for efifb
Oct 13 00:26:08.038584 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Oct 13 00:26:08.038589 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Oct 13 00:26:08.038594 kernel: efifb: scrolling: redraw
Oct 13 00:26:08.038599 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Oct 13 00:26:08.038604 kernel: Console: switching to colour frame buffer device 128x48
Oct 13 00:26:08.038608 kernel: fb0: EFI VGA frame buffer device
Oct 13 00:26:08.038613 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Oct 13 00:26:08.038618 kernel: hid: raw HID events driver (C) Jiri Kosina
Oct 13 00:26:08.038623 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Oct 13 00:26:08.038629 kernel: NET: Registered PF_INET6 protocol family
Oct 13 00:26:08.038634 kernel: watchdog: NMI not fully supported
Oct 13 00:26:08.038638 kernel: watchdog: Hard watchdog permanently disabled
Oct 13 00:26:08.038643 kernel: Segment Routing with IPv6
Oct 13 00:26:08.038648 kernel: In-situ OAM (IOAM) with IPv6
Oct 13 00:26:08.038653 kernel: NET: Registered PF_PACKET protocol family
Oct 13 00:26:08.038658 kernel: Key type dns_resolver registered
Oct 13 00:26:08.038662 kernel: registered taskstats version 1
Oct 13 00:26:08.038667 kernel: Loading compiled-in X.509 certificates
Oct 13 00:26:08.038673 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.51-flatcar: b8447a1087a9e9c4d5b9d4c2f2bba5a69a74f139'
Oct 13 00:26:08.038677 kernel: Demotion targets for Node 0: null
Oct 13 00:26:08.038682 kernel: Key type .fscrypt registered
Oct 13 00:26:08.038687 kernel: Key type fscrypt-provisioning registered
Oct 13 00:26:08.038692 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 13 00:26:08.038697 kernel: ima: Allocated hash algorithm: sha1
Oct 13 00:26:08.038701 kernel: ima: No architecture policies found
Oct 13 00:26:08.038706 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Oct 13 00:26:08.038712 kernel: clk: Disabling unused clocks
Oct 13 00:26:08.038717 kernel: PM: genpd: Disabling unused power domains
Oct 13 00:26:08.038721 kernel: Warning: unable to open an initial console.
Oct 13 00:26:08.038727 kernel: Freeing unused kernel memory: 38976K
Oct 13 00:26:08.038731 kernel: Run /init as init process
Oct 13 00:26:08.038736 kernel: with arguments:
Oct 13 00:26:08.038741 kernel: /init
Oct 13 00:26:08.038745 kernel: with environment:
Oct 13 00:26:08.038750 kernel: HOME=/
Oct 13 00:26:08.038755 kernel: TERM=linux
Oct 13 00:26:08.038760 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Oct 13 00:26:08.038766 systemd[1]: Successfully made /usr/ read-only.
Oct 13 00:26:08.038773 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Oct 13 00:26:08.038778 systemd[1]: Detected virtualization microsoft.
Oct 13 00:26:08.038783 systemd[1]: Detected architecture arm64.
Oct 13 00:26:08.038788 systemd[1]: Running in initrd.
Oct 13 00:26:08.038793 systemd[1]: No hostname configured, using default hostname.
Oct 13 00:26:08.038799 systemd[1]: Hostname set to .
Oct 13 00:26:08.038804 systemd[1]: Initializing machine ID from random generator.
Oct 13 00:26:08.038810 systemd[1]: Queued start job for default target initrd.target.
Oct 13 00:26:08.038815 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 13 00:26:08.038820 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 13 00:26:08.038826 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Oct 13 00:26:08.038831 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 13 00:26:08.038836 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Oct 13 00:26:08.038843 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Oct 13 00:26:08.038849 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Oct 13 00:26:08.038854 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Oct 13 00:26:08.038860 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 13 00:26:08.038865 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 13 00:26:08.038870 systemd[1]: Reached target paths.target - Path Units.
Oct 13 00:26:08.038875 systemd[1]: Reached target slices.target - Slice Units.
Oct 13 00:26:08.038881 systemd[1]: Reached target swap.target - Swaps.
Oct 13 00:26:08.038886 systemd[1]: Reached target timers.target - Timer Units.
Oct 13 00:26:08.038891 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Oct 13 00:26:08.038897 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 13 00:26:08.038902 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Oct 13 00:26:08.038907 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Oct 13 00:26:08.038912 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 13 00:26:08.038917 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 13 00:26:08.038924 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 13 00:26:08.038929 systemd[1]: Reached target sockets.target - Socket Units.
Oct 13 00:26:08.038934 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Oct 13 00:26:08.038939 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 13 00:26:08.038944 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Oct 13 00:26:08.038950 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Oct 13 00:26:08.038955 systemd[1]: Starting systemd-fsck-usr.service...
Oct 13 00:26:08.038960 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 13 00:26:08.038965 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 13 00:26:08.038983 systemd-journald[224]: Collecting audit messages is disabled.
Oct 13 00:26:08.038996 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 13 00:26:08.039002 systemd-journald[224]: Journal started
Oct 13 00:26:08.039017 systemd-journald[224]: Runtime Journal (/run/log/journal/0a51b1a740854a8d967189627433544c) is 8M, max 78.3M, 70.3M free.
Oct 13 00:26:08.042453 systemd-modules-load[226]: Inserted module 'overlay'
Oct 13 00:26:08.056576 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 13 00:26:08.057143 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Oct 13 00:26:08.074574 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 13 00:26:08.074588 kernel: Bridge firewalling registered
Oct 13 00:26:08.070515 systemd-modules-load[226]: Inserted module 'br_netfilter'
Oct 13 00:26:08.078233 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 13 00:26:08.083785 systemd[1]: Finished systemd-fsck-usr.service.
Oct 13 00:26:08.092386 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 13 00:26:08.099857 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 13 00:26:08.110100 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 13 00:26:08.132493 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 13 00:26:08.136799 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 13 00:26:08.158199 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 13 00:26:08.169195 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 13 00:26:08.174676 systemd-tmpfiles[255]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Oct 13 00:26:08.175523 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 13 00:26:08.192989 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 13 00:26:08.202934 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 13 00:26:08.213750 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Oct 13 00:26:08.228762 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 13 00:26:08.239225 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 13 00:26:08.247284 dracut-cmdline[261]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=37fc523060a9b8894388e25ab0f082059dd744d472a2b8577211d4b3dd66a910
Oct 13 00:26:08.295126 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 13 00:26:08.301889 systemd-resolved[262]: Positive Trust Anchors:
Oct 13 00:26:08.326566 kernel: SCSI subsystem initialized
Oct 13 00:26:08.326584 kernel: Loading iSCSI transport class v2.0-870.
Oct 13 00:26:08.301904 systemd-resolved[262]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 13 00:26:08.301925 systemd-resolved[262]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 13 00:26:08.303542 systemd-resolved[262]: Defaulting to hostname 'linux'.
Oct 13 00:26:08.365378 kernel: iscsi: registered transport (tcp)
Oct 13 00:26:08.309499 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 13 00:26:08.319454 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 13 00:26:08.378508 kernel: iscsi: registered transport (qla4xxx)
Oct 13 00:26:08.378520 kernel: QLogic iSCSI HBA Driver
Oct 13 00:26:08.390640 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 13 00:26:08.405206 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 13 00:26:08.410535 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 13 00:26:08.455741 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Oct 13 00:26:08.461206 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Oct 13 00:26:08.536118 kernel: raid6: neonx8 gen() 18569 MB/s
Oct 13 00:26:08.550111 kernel: raid6: neonx4 gen() 18547 MB/s
Oct 13 00:26:08.569113 kernel: raid6: neonx2 gen() 17085 MB/s
Oct 13 00:26:08.589112 kernel: raid6: neonx1 gen() 15043 MB/s
Oct 13 00:26:08.608110 kernel: raid6: int64x8 gen() 10562 MB/s
Oct 13 00:26:08.627202 kernel: raid6: int64x4 gen() 10611 MB/s
Oct 13 00:26:08.646195 kernel: raid6: int64x2 gen() 8992 MB/s
Oct 13 00:26:08.668267 kernel: raid6: int64x1 gen() 7042 MB/s
Oct 13 00:26:08.668317 kernel: raid6: using algorithm neonx8 gen() 18569 MB/s
Oct 13 00:26:08.690157 kernel: raid6: .... xor() 14909 MB/s, rmw enabled
Oct 13 00:26:08.690208 kernel: raid6: using neon recovery algorithm
Oct 13 00:26:08.697982 kernel: xor: measuring software checksum speed
Oct 13 00:26:08.697990 kernel: 8regs : 28655 MB/sec
Oct 13 00:26:08.700466 kernel: 32regs : 28812 MB/sec
Oct 13 00:26:08.702949 kernel: arm64_neon : 37611 MB/sec
Oct 13 00:26:08.705854 kernel: xor: using function: arm64_neon (37611 MB/sec)
Oct 13 00:26:08.744135 kernel: Btrfs loaded, zoned=no, fsverity=no
Oct 13 00:26:08.748858 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Oct 13 00:26:08.758240 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 13 00:26:08.787370 systemd-udevd[473]: Using default interface naming scheme 'v255'.
Oct 13 00:26:08.794763 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 13 00:26:08.806207 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Oct 13 00:26:08.841898 dracut-pre-trigger[483]: rd.md=0: removing MD RAID activation
Oct 13 00:26:08.864142 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 13 00:26:08.869852 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 13 00:26:08.913169 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 13 00:26:08.923826 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Oct 13 00:26:08.989133 kernel: hv_vmbus: Vmbus version:5.3
Oct 13 00:26:08.989578 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 13 00:26:08.989675 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 13 00:26:09.007168 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Oct 13 00:26:09.023305 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 13 00:26:08.641922 kernel: hv_vmbus: registering driver hid_hyperv
Oct 13 00:26:08.646046 kernel: hv_vmbus: registering driver hyperv_keyboard
Oct 13 00:26:08.646057 kernel: pps_core: LinuxPPS API ver. 1 registered
Oct 13 00:26:08.646062 kernel: hv_vmbus: registering driver hv_netvsc
Oct 13 00:26:08.646067 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Oct 13 00:26:08.646074 kernel: PTP clock support registered
Oct 13 00:26:08.646080 kernel: hv_vmbus: registering driver hv_storvsc
Oct 13 00:26:08.646085 kernel: hv_utils: Registering HyperV Utility Driver
Oct 13 00:26:08.646090 kernel: hv_vmbus: registering driver hv_utils
Oct 13 00:26:08.646095 kernel: scsi host1: storvsc_host_t
Oct 13 00:26:08.646185 kernel: hv_utils: Heartbeat IC version 3.0
Oct 13 00:26:08.646192 kernel: hv_utils: Shutdown IC version 3.2
Oct 13 00:26:08.646197 kernel: hv_utils: TimeSync IC version 4.0
Oct 13 00:26:08.646202 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Oct 13 00:26:08.646209 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Oct 13 00:26:08.646269 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Oct 13 00:26:08.646275 kernel: scsi host0: storvsc_host_t
Oct 13 00:26:08.646333 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Oct 13 00:26:08.646399 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Oct 13 00:26:08.646462 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Oct 13 00:26:08.646519 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Oct 13 00:26:08.646578 kernel: sd 0:0:0:0: [sda] Write Protect is off
Oct 13 00:26:08.646634 systemd-journald[224]: Time jumped backwards, rotating.
Oct 13 00:26:08.646673 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Oct 13 00:26:08.646732 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Oct 13 00:26:08.575387 systemd-resolved[262]: Clock change detected. Flushing caches.
Oct 13 00:26:08.669705 kernel: hv_netvsc 0022487d-cdd6-0022-487d-cdd60022487d eth0: VF slot 1 added
Oct 13 00:26:08.669824 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#261 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Oct 13 00:26:08.669898 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#268 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Oct 13 00:26:08.636348 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Oct 13 00:26:08.686412 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Oct 13 00:26:08.686438 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Oct 13 00:26:08.688702 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 13 00:26:08.706344 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Oct 13 00:26:08.706462 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Oct 13 00:26:08.706469 kernel: hv_vmbus: registering driver hv_pci
Oct 13 00:26:08.706474 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Oct 13 00:26:08.706539 kernel: hv_pci a1566f27-bd27-4333-8e08-449df07733e8: PCI VMBus probing: Using version 0x10004
Oct 13 00:26:08.721455 kernel: hv_pci a1566f27-bd27-4333-8e08-449df07733e8: PCI host bridge to bus bd27:00
Oct 13 00:26:08.721566 kernel: pci_bus bd27:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Oct 13 00:26:08.721642 kernel: pci_bus bd27:00: No busn resource found for root bus, will use [bus 00-ff]
Oct 13 00:26:08.731278 kernel: pci bd27:00:02.0: [15b3:101a] type 00 class 0x020000 PCIe Endpoint
Oct 13 00:26:08.737973 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#305 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Oct 13 00:26:08.742992 kernel: pci bd27:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]
Oct 13 00:26:08.747014 kernel: pci bd27:00:02.0: enabling Extended Tags
Oct 13 00:26:08.760968 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Oct 13 00:26:08.771019 kernel: pci bd27:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at bd27:00:02.0 (capable of 252.048 Gb/s with 16.0 GT/s PCIe x16 link)
Oct 13 00:26:08.780188 kernel: pci_bus bd27:00: busn_res: [bus 00-ff] end is updated to 00
Oct 13 00:26:08.780315 kernel: pci bd27:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]: assigned
Oct 13 00:26:08.844868 kernel: mlx5_core bd27:00:02.0: enabling device (0000 -> 0002)
Oct 13 00:26:08.852757 kernel: mlx5_core bd27:00:02.0: PTM is not supported by PCIe
Oct 13 00:26:08.852897 kernel: mlx5_core bd27:00:02.0: firmware version: 16.30.5006
Oct 13 00:26:09.019476 kernel: hv_netvsc 0022487d-cdd6-0022-487d-cdd60022487d eth0: VF registering: eth1
Oct 13 00:26:09.019665 kernel: mlx5_core bd27:00:02.0 eth1: joined to eth0
Oct 13 00:26:09.025003 kernel: mlx5_core bd27:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Oct 13 00:26:09.033510 kernel: mlx5_core bd27:00:02.0 enP48423s1: renamed from eth1
Oct 13 00:26:09.257095 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Oct 13 00:26:09.280628 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Oct 13 00:26:09.303814 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Oct 13 00:26:09.338190 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Oct 13 00:26:09.344087 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Oct 13 00:26:09.356271 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Oct 13 00:26:09.366711 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 13 00:26:09.375792 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 13 00:26:09.386217 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 13 00:26:09.396731 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Oct 13 00:26:09.425404 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Oct 13 00:26:09.448147 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#315 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Oct 13 00:26:09.451408 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Oct 13 00:26:09.468293 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Oct 13 00:26:10.482268 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#11 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Oct 13 00:26:10.497341 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Oct 13 00:26:10.497379 disk-uuid[662]: The operation has completed successfully.
Oct 13 00:26:10.568505 systemd[1]: disk-uuid.service: Deactivated successfully.
Oct 13 00:26:10.572154 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Oct 13 00:26:10.595189 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Oct 13 00:26:10.619019 sh[820]: Success
Oct 13 00:26:10.651298 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 13 00:26:10.651339 kernel: device-mapper: uevent: version 1.0.3
Oct 13 00:26:10.656205 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Oct 13 00:26:10.664958 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Oct 13 00:26:10.958317 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Oct 13 00:26:10.971336 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Oct 13 00:26:10.979024 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Oct 13 00:26:11.005724 kernel: BTRFS: device fsid e4495086-3456-43e0-be7b-4c3c53a67174 devid 1 transid 38 /dev/mapper/usr (254:0) scanned by mount (838)
Oct 13 00:26:11.005769 kernel: BTRFS info (device dm-0): first mount of filesystem e4495086-3456-43e0-be7b-4c3c53a67174
Oct 13 00:26:11.005785 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Oct 13 00:26:11.329394 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Oct 13 00:26:11.329473 kernel: BTRFS info (device dm-0): enabling free space tree
Oct 13 00:26:11.387305 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Oct 13 00:26:11.391383 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Oct 13 00:26:11.399042 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Oct 13 00:26:11.399636 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Oct 13 00:26:11.420526 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Oct 13 00:26:11.457363 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (877)
Oct 13 00:26:11.457403 kernel: BTRFS info (device sda6): first mount of filesystem 51f6bef3-5c80-492f-be85-d924f50fa726
Oct 13 00:26:11.462533 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Oct 13 00:26:11.498805 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 13 00:26:11.517029 kernel: BTRFS info (device sda6): turning on async discard
Oct 13 00:26:11.517055 kernel: BTRFS info (device sda6): enabling free space tree
Oct 13 00:26:11.517494 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 13 00:26:11.534304 kernel: BTRFS info (device sda6): last unmount of filesystem 51f6bef3-5c80-492f-be85-d924f50fa726
Oct 13 00:26:11.539061 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Oct 13 00:26:11.543997 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Oct 13 00:26:11.568337 systemd-networkd[1003]: lo: Link UP
Oct 13 00:26:11.568347 systemd-networkd[1003]: lo: Gained carrier
Oct 13 00:26:11.569075 systemd-networkd[1003]: Enumeration completed
Oct 13 00:26:11.571028 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 13 00:26:11.575695 systemd[1]: Reached target network.target - Network.
Oct 13 00:26:11.578412 systemd-networkd[1003]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 13 00:26:11.578416 systemd-networkd[1003]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 13 00:26:11.643960 kernel: mlx5_core bd27:00:02.0 enP48423s1: Link up
Oct 13 00:26:11.674950 kernel: hv_netvsc 0022487d-cdd6-0022-487d-cdd60022487d eth0: Data path switched to VF: enP48423s1
Oct 13 00:26:11.675424 systemd-networkd[1003]: enP48423s1: Link UP
Oct 13 00:26:11.675482 systemd-networkd[1003]: eth0: Link UP
Oct 13 00:26:11.675583 systemd-networkd[1003]: eth0: Gained carrier
Oct 13 00:26:11.675596 systemd-networkd[1003]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 13 00:26:11.695090 systemd-networkd[1003]: enP48423s1: Gained carrier
Oct 13 00:26:11.705974 systemd-networkd[1003]: eth0: DHCPv4 address 10.200.20.34/24, gateway 10.200.20.1 acquired from 168.63.129.16
Oct 13 00:26:12.575443 ignition[1008]: Ignition 2.22.0
Oct 13 00:26:12.575457 ignition[1008]: Stage: fetch-offline
Oct 13 00:26:12.577444 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 13 00:26:12.575616 ignition[1008]: no configs at "/usr/lib/ignition/base.d"
Oct 13 00:26:12.584883 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Oct 13 00:26:12.575625 ignition[1008]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Oct 13 00:26:12.575695 ignition[1008]: parsed url from cmdline: ""
Oct 13 00:26:12.575697 ignition[1008]: no config URL provided
Oct 13 00:26:12.575701 ignition[1008]: reading system config file "/usr/lib/ignition/user.ign"
Oct 13 00:26:12.575707 ignition[1008]: no config at "/usr/lib/ignition/user.ign"
Oct 13 00:26:12.575710 ignition[1008]: failed to fetch config: resource requires networking
Oct 13 00:26:12.575825 ignition[1008]: Ignition finished successfully
Oct 13 00:26:12.628826 ignition[1017]: Ignition 2.22.0
Oct 13 00:26:12.628836 ignition[1017]: Stage: fetch
Oct 13 00:26:12.629018 ignition[1017]: no configs at "/usr/lib/ignition/base.d"
Oct 13 00:26:12.629026 ignition[1017]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Oct 13 00:26:12.629089 ignition[1017]: parsed url from cmdline: ""
Oct 13 00:26:12.629095 ignition[1017]: no config URL provided
Oct 13 00:26:12.629098 ignition[1017]: reading system config file "/usr/lib/ignition/user.ign"
Oct 13 00:26:12.629104 ignition[1017]: no config at "/usr/lib/ignition/user.ign"
Oct 13 00:26:12.629118 ignition[1017]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Oct 13 00:26:12.697500 ignition[1017]: GET result: OK
Oct 13 00:26:12.697557 ignition[1017]: config has been read from IMDS userdata
Oct 13 00:26:12.697579 ignition[1017]: parsing config with SHA512: 56efd73c9b693c2104da0748cf96e0d0cba0d1723b7fc11988c9a33c4a1107eb0b89544918799e7d7172c7e6d9bb6f4b8ca6d762dd3aff454aceabcf0b90a180
Oct 13 00:26:12.700097 unknown[1017]: fetched base config from "system"
Oct 13 00:26:12.700316 ignition[1017]: fetch: fetch complete
Oct 13 00:26:12.700108 unknown[1017]: fetched base config from "system"
Oct 13 00:26:12.700319 ignition[1017]: fetch: fetch passed
Oct 13 00:26:12.700112 unknown[1017]: fetched user config from "azure"
Oct 13 00:26:12.700357 ignition[1017]: Ignition finished successfully
Oct 13 00:26:12.702359 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Oct 13 00:26:12.708330 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Oct 13 00:26:12.751319 ignition[1024]: Ignition 2.22.0
Oct 13 00:26:12.751330 ignition[1024]: Stage: kargs
Oct 13 00:26:12.751476 ignition[1024]: no configs at "/usr/lib/ignition/base.d"
Oct 13 00:26:12.757149 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Oct 13 00:26:12.751483 ignition[1024]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Oct 13 00:26:12.765137 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Oct 13 00:26:12.755389 ignition[1024]: kargs: kargs passed
Oct 13 00:26:12.755432 ignition[1024]: Ignition finished successfully
Oct 13 00:26:12.794378 ignition[1030]: Ignition 2.22.0
Oct 13 00:26:12.794390 ignition[1030]: Stage: disks
Oct 13 00:26:12.799105 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Oct 13 00:26:12.794532 ignition[1030]: no configs at "/usr/lib/ignition/base.d"
Oct 13 00:26:12.804043 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Oct 13 00:26:12.794538 ignition[1030]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Oct 13 00:26:12.812625 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Oct 13 00:26:12.795053 ignition[1030]: disks: disks passed
Oct 13 00:26:12.821029 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 13 00:26:12.795085 ignition[1030]: Ignition finished successfully
Oct 13 00:26:12.829801 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 13 00:26:12.838267 systemd[1]: Reached target basic.target - Basic System.
Oct 13 00:26:12.847392 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Oct 13 00:26:12.922320 systemd-fsck[1038]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks
Oct 13 00:26:12.930139 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Oct 13 00:26:12.936391 systemd[1]: Mounting sysroot.mount - /sysroot...
Oct 13 00:26:13.086212 systemd-networkd[1003]: eth0: Gained IPv6LL
Oct 13 00:26:14.843206 kernel: EXT4-fs (sda9): mounted filesystem 1aa1d0b4-cbac-4728-b9e0-662fa574e9ad r/w with ordered data mode. Quota mode: none.
Oct 13 00:26:14.843813 systemd[1]: Mounted sysroot.mount - /sysroot.
Oct 13 00:26:14.847687 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Oct 13 00:26:14.881814 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 13 00:26:14.898241 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Oct 13 00:26:14.902955 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Oct 13 00:26:14.922101 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Oct 13 00:26:14.933684 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1052)
Oct 13 00:26:14.922162 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 13 00:26:14.938603 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Oct 13 00:26:14.959858 kernel: BTRFS info (device sda6): first mount of filesystem 51f6bef3-5c80-492f-be85-d924f50fa726
Oct 13 00:26:14.959875 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Oct 13 00:26:14.960828 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Oct 13 00:26:14.979808 kernel: BTRFS info (device sda6): turning on async discard
Oct 13 00:26:14.979836 kernel: BTRFS info (device sda6): enabling free space tree
Oct 13 00:26:14.982191 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 13 00:26:15.434254 coreos-metadata[1054]: Oct 13 00:26:15.434 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Oct 13 00:26:15.442993 coreos-metadata[1054]: Oct 13 00:26:15.442 INFO Fetch successful
Oct 13 00:26:15.447173 coreos-metadata[1054]: Oct 13 00:26:15.443 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Oct 13 00:26:15.455594 coreos-metadata[1054]: Oct 13 00:26:15.455 INFO Fetch successful
Oct 13 00:26:15.471395 coreos-metadata[1054]: Oct 13 00:26:15.471 INFO wrote hostname ci-4459.1.0-a-27183f81a1 to /sysroot/etc/hostname
Oct 13 00:26:15.479308 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Oct 13 00:26:15.713228 initrd-setup-root[1083]: cut: /sysroot/etc/passwd: No such file or directory
Oct 13 00:26:15.774490 initrd-setup-root[1090]: cut: /sysroot/etc/group: No such file or directory
Oct 13 00:26:15.794272 initrd-setup-root[1097]: cut: /sysroot/etc/shadow: No such file or directory
Oct 13 00:26:15.799316 initrd-setup-root[1104]: cut: /sysroot/etc/gshadow: No such file or directory
Oct 13 00:26:16.823455 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Oct 13 00:26:16.829144 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Oct 13 00:26:16.851490 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Oct 13 00:26:16.866045 kernel: BTRFS info (device sda6): last unmount of filesystem 51f6bef3-5c80-492f-be85-d924f50fa726
Oct 13 00:26:16.856784 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Oct 13 00:26:16.888373 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Oct 13 00:26:16.895524 ignition[1172]: INFO : Ignition 2.22.0
Oct 13 00:26:16.895524 ignition[1172]: INFO : Stage: mount
Oct 13 00:26:16.895524 ignition[1172]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 13 00:26:16.895524 ignition[1172]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Oct 13 00:26:16.895524 ignition[1172]: INFO : mount: mount passed
Oct 13 00:26:16.895524 ignition[1172]: INFO : Ignition finished successfully
Oct 13 00:26:16.895569 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Oct 13 00:26:16.900736 systemd[1]: Starting ignition-files.service - Ignition (files)...
Oct 13 00:26:16.919038 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 13 00:26:16.951555 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1183)
Oct 13 00:26:16.951571 kernel: BTRFS info (device sda6): first mount of filesystem 51f6bef3-5c80-492f-be85-d924f50fa726
Oct 13 00:26:16.956004 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Oct 13 00:26:16.965331 kernel: BTRFS info (device sda6): turning on async discard
Oct 13 00:26:16.965353 kernel: BTRFS info (device sda6): enabling free space tree
Oct 13 00:26:16.966833 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 13 00:26:16.996036 ignition[1201]: INFO : Ignition 2.22.0
Oct 13 00:26:16.996036 ignition[1201]: INFO : Stage: files
Oct 13 00:26:17.002312 ignition[1201]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 13 00:26:17.002312 ignition[1201]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Oct 13 00:26:17.002312 ignition[1201]: DEBUG : files: compiled without relabeling support, skipping
Oct 13 00:26:17.025211 ignition[1201]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Oct 13 00:26:17.025211 ignition[1201]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Oct 13 00:26:17.081331 ignition[1201]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Oct 13 00:26:17.087312 ignition[1201]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Oct 13 00:26:17.087312 ignition[1201]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Oct 13 00:26:17.081677 unknown[1201]: wrote ssh authorized keys file for user: core
Oct 13 00:26:17.124131 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Oct 13 00:26:17.132641 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Oct 13 00:26:17.162987 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Oct 13 00:26:17.320478 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Oct 13 00:26:17.328217 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Oct 13 00:26:17.328217 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Oct 13 00:26:17.328217 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Oct 13 00:26:17.328217 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Oct 13 00:26:17.328217 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 13 00:26:17.328217 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 13 00:26:17.328217 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 13 00:26:17.328217 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 13 00:26:17.383767 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Oct 13 00:26:17.383767 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Oct 13 00:26:17.383767 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Oct 13 00:26:17.383767 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Oct 13 00:26:17.383767 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Oct 13 00:26:17.383767 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Oct 13 00:26:17.847421 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Oct 13 00:26:18.081375 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Oct 13 00:26:18.081375 ignition[1201]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Oct 13 00:26:18.115131 ignition[1201]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 13 00:26:18.128125 ignition[1201]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 13 00:26:18.128125 ignition[1201]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Oct 13 00:26:18.157495 ignition[1201]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Oct 13 00:26:18.157495 ignition[1201]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Oct 13 00:26:18.157495 ignition[1201]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Oct 13 00:26:18.157495 ignition[1201]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Oct 13 00:26:18.157495 ignition[1201]: INFO : files: files passed
Oct 13 00:26:18.157495 ignition[1201]: INFO : Ignition finished successfully
Oct 13 00:26:18.154186 systemd[1]: Finished ignition-files.service - Ignition (files).
Oct 13 00:26:18.162931 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Oct 13 00:26:18.185355 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Oct 13 00:26:18.207871 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 13 00:26:18.238822 initrd-setup-root-after-ignition[1229]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 13 00:26:18.238822 initrd-setup-root-after-ignition[1229]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Oct 13 00:26:18.213361 systemd[1]: ignition-quench.service: Deactivated successfully.
Oct 13 00:26:18.257995 initrd-setup-root-after-ignition[1233]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 13 00:26:18.213429 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Oct 13 00:26:18.222880 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Oct 13 00:26:18.232439 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Oct 13 00:26:18.274192 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 13 00:26:18.274273 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Oct 13 00:26:18.283083 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Oct 13 00:26:18.291595 systemd[1]: Reached target initrd.target - Initrd Default Target.
Oct 13 00:26:18.303820 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Oct 13 00:26:18.304478 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Oct 13 00:26:18.332497 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 13 00:26:18.337955 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Oct 13 00:26:18.372197 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Oct 13 00:26:18.377013 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 13 00:26:18.386147 systemd[1]: Stopped target timers.target - Timer Units.
Oct 13 00:26:18.394353 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 13 00:26:18.394444 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 13 00:26:18.406242 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Oct 13 00:26:18.410373 systemd[1]: Stopped target basic.target - Basic System.
Oct 13 00:26:18.419122 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Oct 13 00:26:18.427165 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 13 00:26:18.435389 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Oct 13 00:26:18.444004 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Oct 13 00:26:18.452927 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Oct 13 00:26:18.461103 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 13 00:26:18.470224 systemd[1]: Stopped target sysinit.target - System Initialization.
Oct 13 00:26:18.477982 systemd[1]: Stopped target local-fs.target - Local File Systems.
Oct 13 00:26:18.486742 systemd[1]: Stopped target swap.target - Swaps.
Oct 13 00:26:18.493906 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 13 00:26:18.494062 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Oct 13 00:26:18.504425 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Oct 13 00:26:18.512168 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 13 00:26:18.521434 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Oct 13 00:26:18.521514 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 13 00:26:18.530513 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 13 00:26:18.530636 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Oct 13 00:26:18.542876 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Oct 13 00:26:18.543013 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 13 00:26:18.553064 systemd[1]: ignition-files.service: Deactivated successfully.
Oct 13 00:26:18.553174 systemd[1]: Stopped ignition-files.service - Ignition (files).
Oct 13 00:26:18.560145 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Oct 13 00:26:18.613262 ignition[1254]: INFO : Ignition 2.22.0
Oct 13 00:26:18.613262 ignition[1254]: INFO : Stage: umount
Oct 13 00:26:18.613262 ignition[1254]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 13 00:26:18.613262 ignition[1254]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Oct 13 00:26:18.613262 ignition[1254]: INFO : umount: umount passed
Oct 13 00:26:18.613262 ignition[1254]: INFO : Ignition finished successfully
Oct 13 00:26:18.560252 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Oct 13 00:26:18.571038 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Oct 13 00:26:18.579212 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 13 00:26:18.579410 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 13 00:26:18.596449 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Oct 13 00:26:18.603149 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 13 00:26:18.603246 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 13 00:26:18.612147 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 13 00:26:18.612252 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 13 00:26:18.623594 systemd[1]: ignition-mount.service: Deactivated successfully.
Oct 13 00:26:18.623677 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Oct 13 00:26:18.632344 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 13 00:26:18.633961 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Oct 13 00:26:18.639093 systemd[1]: ignition-disks.service: Deactivated successfully.
Oct 13 00:26:18.639128 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Oct 13 00:26:18.647892 systemd[1]: ignition-kargs.service: Deactivated successfully.
Oct 13 00:26:18.647924 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Oct 13 00:26:18.654103 systemd[1]: ignition-fetch.service: Deactivated successfully.
Oct 13 00:26:18.654129 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Oct 13 00:26:18.661369 systemd[1]: Stopped target network.target - Network.
Oct 13 00:26:18.669890 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Oct 13 00:26:18.669931 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 13 00:26:18.678562 systemd[1]: Stopped target paths.target - Path Units.
Oct 13 00:26:18.685782 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 13 00:26:18.688953 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 13 00:26:18.697147 systemd[1]: Stopped target slices.target - Slice Units.
Oct 13 00:26:18.704828 systemd[1]: Stopped target sockets.target - Socket Units.
Oct 13 00:26:18.713038 systemd[1]: iscsid.socket: Deactivated successfully.
Oct 13 00:26:18.713079 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Oct 13 00:26:18.717214 systemd[1]: iscsiuio.socket: Deactivated successfully.
Oct 13 00:26:18.717262 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 13 00:26:18.725594 systemd[1]: ignition-setup.service: Deactivated successfully.
Oct 13 00:26:18.725640 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Oct 13 00:26:18.733130 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Oct 13 00:26:18.733159 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Oct 13 00:26:18.742025 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Oct 13 00:26:18.749227 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Oct 13 00:26:18.757556 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Oct 13 00:26:18.758064 systemd[1]: sysroot-boot.service: Deactivated successfully.
Oct 13 00:26:18.758127 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Oct 13 00:26:18.766641 systemd[1]: systemd-resolved.service: Deactivated successfully.
Oct 13 00:26:18.959766 kernel: hv_netvsc 0022487d-cdd6-0022-487d-cdd60022487d eth0: Data path switched from VF: enP48423s1
Oct 13 00:26:18.766715 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Oct 13 00:26:18.784488 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Oct 13 00:26:18.784671 systemd[1]: systemd-networkd.service: Deactivated successfully.
Oct 13 00:26:18.784757 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Oct 13 00:26:18.795427 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Oct 13 00:26:18.796486 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Oct 13 00:26:18.803553 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Oct 13 00:26:18.803583 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Oct 13 00:26:18.811712 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Oct 13 00:26:18.811763 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Oct 13 00:26:18.820287 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Oct 13 00:26:18.834086 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Oct 13 00:26:18.834135 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 13 00:26:18.842465 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 13 00:26:18.842494 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Oct 13 00:26:18.859897 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 13 00:26:18.859934 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Oct 13 00:26:18.864745 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 13 00:26:18.864778 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 13 00:26:18.876158 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 13 00:26:18.887250 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Oct 13 00:26:18.887299 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Oct 13 00:26:18.899931 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 13 00:26:18.900091 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 13 00:26:18.911329 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 13 00:26:18.911406 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Oct 13 00:26:18.919530 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 13 00:26:18.919555 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 13 00:26:18.927959 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 13 00:26:18.927995 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Oct 13 00:26:18.939568 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 13 00:26:18.939609 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Oct 13 00:26:18.956781 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 13 00:26:18.956854 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 13 00:26:18.974073 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Oct 13 00:26:18.988453 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Oct 13 00:26:18.988510 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Oct 13 00:26:19.177578 systemd-journald[224]: Received SIGTERM from PID 1 (systemd).
Oct 13 00:26:19.002536 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 13 00:26:19.002576 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 13 00:26:19.011380 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 13 00:26:19.013009 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 13 00:26:19.026904 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Oct 13 00:26:19.026956 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Oct 13 00:26:19.026982 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Oct 13 00:26:19.027236 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 13 00:26:19.027311 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Oct 13 00:26:19.043917 systemd[1]: network-cleanup.service: Deactivated successfully.
Oct 13 00:26:19.044049 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Oct 13 00:26:19.053164 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Oct 13 00:26:19.061639 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Oct 13 00:26:19.095270 systemd[1]: Switching root.
Oct 13 00:26:19.239405 systemd-journald[224]: Journal stopped
Oct 13 00:26:26.480411 kernel: SELinux: policy capability network_peer_controls=1
Oct 13 00:26:26.480431 kernel: SELinux: policy capability open_perms=1
Oct 13 00:26:26.480438 kernel: SELinux: policy capability extended_socket_class=1
Oct 13 00:26:26.480444 kernel: SELinux: policy capability always_check_network=0
Oct 13 00:26:26.480449 kernel: SELinux: policy capability cgroup_seclabel=1
Oct 13 00:26:26.480455 kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 13 00:26:26.480461 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Oct 13 00:26:26.480466 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Oct 13 00:26:26.480471 kernel: SELinux: policy capability userspace_initial_context=0
Oct 13 00:26:26.480476 kernel: audit: type=1403 audit(1760315180.380:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 13 00:26:26.480483 systemd[1]: Successfully loaded SELinux policy in 214.341ms.
Oct 13 00:26:26.480491 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.299ms.
Oct 13 00:26:26.480497 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Oct 13 00:26:26.480503 systemd[1]: Detected virtualization microsoft.
Oct 13 00:26:26.480509 systemd[1]: Detected architecture arm64.
Oct 13 00:26:26.480515 systemd[1]: Detected first boot.
Oct 13 00:26:26.480522 systemd[1]: Hostname set to .
Oct 13 00:26:26.480528 systemd[1]: Initializing machine ID from random generator.
Oct 13 00:26:26.480534 zram_generator::config[1297]: No configuration found.
Oct 13 00:26:26.480540 kernel: NET: Registered PF_VSOCK protocol family
Oct 13 00:26:26.480546 systemd[1]: Populated /etc with preset unit settings.
Oct 13 00:26:26.480552 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Oct 13 00:26:26.480559 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Oct 13 00:26:26.480566 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Oct 13 00:26:26.480572 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 13 00:26:26.480577 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Oct 13 00:26:26.480584 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Oct 13 00:26:26.480590 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Oct 13 00:26:26.480596 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Oct 13 00:26:26.480602 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Oct 13 00:26:26.480609 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Oct 13 00:26:26.480615 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Oct 13 00:26:26.480621 systemd[1]: Created slice user.slice - User and Session Slice.
Oct 13 00:26:26.480627 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 13 00:26:26.480633 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 13 00:26:26.480639 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Oct 13 00:26:26.480645 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Oct 13 00:26:26.480651 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Oct 13 00:26:26.480658 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 13 00:26:26.480664 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Oct 13 00:26:26.480671 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 13 00:26:26.480677 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 13 00:26:26.480683 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Oct 13 00:26:26.480690 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Oct 13 00:26:26.480696 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Oct 13 00:26:26.480702 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Oct 13 00:26:26.480709 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 13 00:26:26.480715 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 13 00:26:26.480721 systemd[1]: Reached target slices.target - Slice Units.
Oct 13 00:26:26.480727 systemd[1]: Reached target swap.target - Swaps.
Oct 13 00:26:26.480733 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Oct 13 00:26:26.480739 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Oct 13 00:26:26.480747 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Oct 13 00:26:26.480753 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 13 00:26:26.480759 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 13 00:26:26.480766 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 13 00:26:26.480772 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Oct 13 00:26:26.480778 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Oct 13 00:26:26.480784 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Oct 13 00:26:26.480791 systemd[1]: Mounting media.mount - External Media Directory...
Oct 13 00:26:26.480797 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Oct 13 00:26:26.480804 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Oct 13 00:26:26.480810 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Oct 13 00:26:26.480816 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 13 00:26:26.480843 systemd[1]: Reached target machines.target - Containers.
Oct 13 00:26:26.480849 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Oct 13 00:26:26.480855 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 13 00:26:26.480862 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 13 00:26:26.480869 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Oct 13 00:26:26.480875 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 13 00:26:26.480881 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 13 00:26:26.480887 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 13 00:26:26.480893 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Oct 13 00:26:26.480899 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 13 00:26:26.480906 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Oct 13 00:26:26.480912 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct 13 00:26:26.480919 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Oct 13 00:26:26.480925 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Oct 13 00:26:26.480931 systemd[1]: Stopped systemd-fsck-usr.service.
Oct 13 00:26:26.480950 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 13 00:26:26.480956 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 13 00:26:26.480962 kernel: fuse: init (API version 7.41)
Oct 13 00:26:26.480968 kernel: loop: module loaded
Oct 13 00:26:26.480973 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 13 00:26:26.480980 kernel: ACPI: bus type drm_connector registered
Oct 13 00:26:26.480987 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 13 00:26:26.480993 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Oct 13 00:26:26.481000 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Oct 13 00:26:26.481006 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 13 00:26:26.481012 systemd[1]: verity-setup.service: Deactivated successfully.
Oct 13 00:26:26.481030 systemd-journald[1398]: Collecting audit messages is disabled.
Oct 13 00:26:26.481045 systemd[1]: Stopped verity-setup.service.
Oct 13 00:26:26.481052 systemd-journald[1398]: Journal started
Oct 13 00:26:26.481068 systemd-journald[1398]: Runtime Journal (/run/log/journal/36a828f215b04b8096fdaa7a83dd3983) is 8M, max 78.3M, 70.3M free.
Oct 13 00:26:25.691570 systemd[1]: Queued start job for default target multi-user.target.
Oct 13 00:26:25.699457 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Oct 13 00:26:25.699847 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 13 00:26:25.700148 systemd[1]: systemd-journald.service: Consumed 2.333s CPU time.
Oct 13 00:26:26.496070 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 13 00:26:26.497112 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Oct 13 00:26:26.501851 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Oct 13 00:26:26.508017 systemd[1]: Mounted media.mount - External Media Directory.
Oct 13 00:26:26.512057 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Oct 13 00:26:26.516748 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Oct 13 00:26:26.521444 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Oct 13 00:26:26.526020 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Oct 13 00:26:26.531309 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 13 00:26:26.536406 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 13 00:26:26.536541 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Oct 13 00:26:26.541367 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 13 00:26:26.541509 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 13 00:26:26.546522 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 13 00:26:26.546653 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 13 00:26:26.551303 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 13 00:26:26.551421 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 13 00:26:26.556429 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 13 00:26:26.556558 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Oct 13 00:26:26.561204 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 13 00:26:26.561345 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 13 00:26:26.566041 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 13 00:26:26.570877 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 13 00:26:26.575895 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Oct 13 00:26:26.581421 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Oct 13 00:26:26.594676 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 13 00:26:26.601129 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Oct 13 00:26:26.608036 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Oct 13 00:26:26.612978 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Oct 13 00:26:26.613009 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 13 00:26:26.617779 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Oct 13 00:26:26.623755 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Oct 13 00:26:26.627964 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 13 00:26:26.628826 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Oct 13 00:26:26.633989 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Oct 13 00:26:26.638669 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 13 00:26:26.639483 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Oct 13 00:26:26.644111 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 13 00:26:26.645036 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 13 00:26:26.651900 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Oct 13 00:26:26.658817 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Oct 13 00:26:26.664669 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 13 00:26:26.670570 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Oct 13 00:26:26.678331 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Oct 13 00:26:26.688412 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Oct 13 00:26:26.693549 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Oct 13 00:26:26.699199 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Oct 13 00:26:26.719317 systemd-journald[1398]: Time spent on flushing to /var/log/journal/36a828f215b04b8096fdaa7a83dd3983 is 43.485ms for 937 entries.
Oct 13 00:26:26.719317 systemd-journald[1398]: System Journal (/var/log/journal/36a828f215b04b8096fdaa7a83dd3983) is 11.8M, max 2.6G, 2.6G free.
Oct 13 00:26:26.876748 systemd-journald[1398]: Received client request to flush runtime journal.
Oct 13 00:26:26.876801 kernel: loop0: detected capacity change from 0 to 100632
Oct 13 00:26:26.876821 systemd-journald[1398]: /var/log/journal/36a828f215b04b8096fdaa7a83dd3983/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating.
Oct 13 00:26:26.876848 systemd-journald[1398]: Rotating system journal.
Oct 13 00:26:26.788333 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 13 00:26:26.878263 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Oct 13 00:26:26.886278 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Oct 13 00:26:26.886960 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Oct 13 00:26:27.287025 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Oct 13 00:26:27.345352 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Oct 13 00:26:27.352061 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 13 00:26:27.361985 kernel: loop1: detected capacity change from 0 to 119368
Oct 13 00:26:27.408690 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Oct 13 00:26:27.499612 systemd-tmpfiles[1455]: ACLs are not supported, ignoring.
Oct 13 00:26:27.499630 systemd-tmpfiles[1455]: ACLs are not supported, ignoring.
Oct 13 00:26:27.514717 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 13 00:26:27.521637 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 13 00:26:27.548039 systemd-udevd[1459]: Using default interface naming scheme 'v255'.
Oct 13 00:26:27.785976 kernel: loop2: detected capacity change from 0 to 207008
Oct 13 00:26:27.826964 kernel: loop3: detected capacity change from 0 to 27936
Oct 13 00:26:28.275089 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 13 00:26:28.284055 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 13 00:26:28.326790 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Oct 13 00:26:28.371961 kernel: loop4: detected capacity change from 0 to 100632
Oct 13 00:26:28.380764 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Oct 13 00:26:28.389970 kernel: loop5: detected capacity change from 0 to 119368
Oct 13 00:26:28.404139 kernel: loop6: detected capacity change from 0 to 207008
Oct 13 00:26:28.423959 kernel: loop7: detected capacity change from 0 to 27936
Oct 13 00:26:28.428693 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Oct 13 00:26:28.431953 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#26 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Oct 13 00:26:28.438273 (sd-merge)[1495]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Oct 13 00:26:28.438861 (sd-merge)[1495]: Merged extensions into '/usr'.
Oct 13 00:26:28.444116 systemd[1]: Reload requested from client PID 1435 ('systemd-sysext') (unit systemd-sysext.service)...
Oct 13 00:26:28.444130 systemd[1]: Reloading...
Oct 13 00:26:28.506958 kernel: mousedev: PS/2 mouse device common for all mice
Oct 13 00:26:28.507033 zram_generator::config[1539]: No configuration found.
Oct 13 00:26:28.553986 kernel: hv_vmbus: registering driver hv_balloon
Oct 13 00:26:28.565471 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Oct 13 00:26:28.565543 kernel: hv_balloon: Memory hot add disabled on ARM64
Oct 13 00:26:28.602614 kernel: hv_vmbus: registering driver hyperv_fb
Oct 13 00:26:28.602686 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Oct 13 00:26:28.609350 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Oct 13 00:26:28.613663 kernel: Console: switching to colour dummy device 80x25
Oct 13 00:26:28.620759 kernel: Console: switching to colour frame buffer device 128x48
Oct 13 00:26:28.662734 systemd-networkd[1476]: lo: Link UP
Oct 13 00:26:28.662744 systemd-networkd[1476]: lo: Gained carrier
Oct 13 00:26:28.664976 systemd-networkd[1476]: Enumeration completed
Oct 13 00:26:28.668399 systemd-networkd[1476]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 13 00:26:28.668407 systemd-networkd[1476]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 13 00:26:28.722977 kernel: mlx5_core bd27:00:02.0 enP48423s1: Link up
Oct 13 00:26:28.744960 kernel: hv_netvsc 0022487d-cdd6-0022-487d-cdd60022487d eth0: Data path switched to VF: enP48423s1
Oct 13 00:26:28.746268 systemd-networkd[1476]: enP48423s1: Link UP
Oct 13 00:26:28.746389 systemd-networkd[1476]: eth0: Link UP
Oct 13 00:26:28.746392 systemd-networkd[1476]: eth0: Gained carrier
Oct 13 00:26:28.746412 systemd-networkd[1476]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 13 00:26:28.751124 systemd-networkd[1476]: enP48423s1: Gained carrier
Oct 13 00:26:28.756974 systemd-networkd[1476]: eth0: DHCPv4 address 10.200.20.34/24, gateway 10.200.20.1 acquired from 168.63.129.16
Oct 13 00:26:28.801112 systemd[1]: Reloading finished in 356 ms.
Oct 13 00:26:28.815793 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 13 00:26:28.821767 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Oct 13 00:26:28.829983 kernel: MACsec IEEE 802.1AE
Oct 13 00:26:28.850452 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Oct 13 00:26:28.871159 systemd[1]: Starting ensure-sysext.service...
Oct 13 00:26:28.874754 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Oct 13 00:26:28.885117 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Oct 13 00:26:28.892320 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Oct 13 00:26:28.900563 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 13 00:26:28.909530 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 13 00:26:28.923107 systemd-tmpfiles[1677]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Oct 13 00:26:28.923363 systemd-tmpfiles[1677]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Oct 13 00:26:28.923614 systemd-tmpfiles[1677]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Oct 13 00:26:28.923834 systemd-tmpfiles[1677]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Oct 13 00:26:28.924334 systemd-tmpfiles[1677]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Oct 13 00:26:28.924581 systemd-tmpfiles[1677]: ACLs are not supported, ignoring.
Oct 13 00:26:28.924683 systemd-tmpfiles[1677]: ACLs are not supported, ignoring.
Oct 13 00:26:28.925518 systemd[1]: Reload requested from client PID 1673 ('systemctl') (unit ensure-sysext.service)...
Oct 13 00:26:28.925531 systemd[1]: Reloading...
Oct 13 00:26:28.966851 systemd-tmpfiles[1677]: Detected autofs mount point /boot during canonicalization of boot.
Oct 13 00:26:28.966859 systemd-tmpfiles[1677]: Skipping /boot
Oct 13 00:26:28.975996 zram_generator::config[1714]: No configuration found.
Oct 13 00:26:28.977729 systemd-tmpfiles[1677]: Detected autofs mount point /boot during canonicalization of boot.
Oct 13 00:26:28.977739 systemd-tmpfiles[1677]: Skipping /boot
Oct 13 00:26:29.128728 systemd[1]: Reloading finished in 202 ms.
Oct 13 00:26:29.150338 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Oct 13 00:26:29.157463 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Oct 13 00:26:29.163637 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 13 00:26:29.175934 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Oct 13 00:26:29.196163 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Oct 13 00:26:29.203572 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Oct 13 00:26:29.215547 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 13 00:26:29.223886 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Oct 13 00:26:29.230559 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 13 00:26:29.234148 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 13 00:26:29.241175 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 13 00:26:29.246917 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 13 00:26:29.252538 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 13 00:26:29.252629 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 13 00:26:29.254522 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 13 00:26:29.254655 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 13 00:26:29.260675 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 13 00:26:29.260805 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 13 00:26:29.271658 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 13 00:26:29.273417 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 13 00:26:29.282283 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 13 00:26:29.287685 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 13 00:26:29.287808 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 13 00:26:29.288424 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 13 00:26:29.289283 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 13 00:26:29.295182 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Oct 13 00:26:29.295302 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 13 00:26:29.301216 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 13 00:26:29.301685 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 13 00:26:29.313247 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Oct 13 00:26:29.322209 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 13 00:26:29.323120 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 13 00:26:29.329952 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 13 00:26:29.337675 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 13 00:26:29.345050 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 13 00:26:29.350060 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 13 00:26:29.350098 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 13 00:26:29.350136 systemd[1]: Reached target time-set.target - System Time Set. Oct 13 00:26:29.355157 systemd-resolved[1778]: Positive Trust Anchors: Oct 13 00:26:29.355304 systemd[1]: Finished ensure-sysext.service. Oct 13 00:26:29.355405 systemd-resolved[1778]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 13 00:26:29.355471 systemd-resolved[1778]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 13 00:26:29.359395 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 13 00:26:29.359525 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 13 00:26:29.365508 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 13 00:26:29.365651 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 13 00:26:29.371197 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 13 00:26:29.371318 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 13 00:26:29.377483 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 13 00:26:29.377611 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 13 00:26:29.385484 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 13 00:26:29.385554 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 13 00:26:29.416629 augenrules[1819]: No rules Oct 13 00:26:29.417796 systemd[1]: audit-rules.service: Deactivated successfully. Oct 13 00:26:29.418010 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Oct 13 00:26:29.419886 systemd-resolved[1778]: Using system hostname 'ci-4459.1.0-a-27183f81a1'. Oct 13 00:26:29.437868 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 13 00:26:29.442988 systemd[1]: Reached target network.target - Network. Oct 13 00:26:29.446680 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 13 00:26:29.602703 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Oct 13 00:26:30.188925 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 00:26:30.238053 systemd-networkd[1476]: eth0: Gained IPv6LL Oct 13 00:26:30.240639 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 13 00:26:30.246875 systemd[1]: Reached target network-online.target - Network is Online. Oct 13 00:26:31.633650 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Oct 13 00:26:31.639259 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 13 00:26:34.423970 ldconfig[1430]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 13 00:26:34.436358 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Oct 13 00:26:34.442562 systemd[1]: Starting systemd-update-done.service - Update is Completed... Oct 13 00:26:34.466003 systemd[1]: Finished systemd-update-done.service - Update is Completed. Oct 13 00:26:34.470995 systemd[1]: Reached target sysinit.target - System Initialization. Oct 13 00:26:34.475458 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Oct 13 00:26:34.480295 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. 
Oct 13 00:26:34.485270 systemd[1]: Started logrotate.timer - Daily rotation of log files. Oct 13 00:26:34.489483 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Oct 13 00:26:34.494554 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Oct 13 00:26:34.499415 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 13 00:26:34.499438 systemd[1]: Reached target paths.target - Path Units. Oct 13 00:26:34.502905 systemd[1]: Reached target timers.target - Timer Units. Oct 13 00:26:34.531157 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Oct 13 00:26:34.537045 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 13 00:26:34.542343 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Oct 13 00:26:34.547486 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Oct 13 00:26:34.552671 systemd[1]: Reached target ssh-access.target - SSH Access Available. Oct 13 00:26:34.567484 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Oct 13 00:26:34.583596 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Oct 13 00:26:34.588887 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 13 00:26:34.593465 systemd[1]: Reached target sockets.target - Socket Units. Oct 13 00:26:34.597453 systemd[1]: Reached target basic.target - Basic System. Oct 13 00:26:34.601163 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 13 00:26:34.601187 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Oct 13 00:26:34.634522 systemd[1]: Starting chronyd.service - NTP client/server... 
Oct 13 00:26:34.650043 systemd[1]: Starting containerd.service - containerd container runtime... Oct 13 00:26:34.655402 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Oct 13 00:26:34.661387 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 13 00:26:34.666731 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Oct 13 00:26:34.674021 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 13 00:26:34.679112 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 13 00:26:34.683755 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 13 00:26:34.686043 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Oct 13 00:26:34.690327 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Oct 13 00:26:34.692203 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 00:26:34.697833 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 13 00:26:34.701642 jq[1844]: false Oct 13 00:26:34.704901 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 13 00:26:34.710748 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 13 00:26:34.712164 KVP[1846]: KVP starting; pid is:1846 Oct 13 00:26:34.716958 kernel: hv_utils: KVP IC version 4.0 Oct 13 00:26:34.716973 KVP[1846]: KVP LIC Version: 3.1 Oct 13 00:26:34.717328 chronyd[1836]: chronyd version 4.7 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG) Oct 13 00:26:34.719322 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Oct 13 00:26:34.728530 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Oct 13 00:26:34.736341 systemd[1]: Starting systemd-logind.service - User Login Management... Oct 13 00:26:34.742551 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 13 00:26:34.747158 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 13 00:26:34.747776 systemd[1]: Starting update-engine.service - Update Engine... Oct 13 00:26:34.751929 extend-filesystems[1845]: Found /dev/sda6 Oct 13 00:26:34.754541 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 13 00:26:34.768599 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Oct 13 00:26:34.771797 extend-filesystems[1845]: Found /dev/sda9 Oct 13 00:26:34.781623 extend-filesystems[1845]: Checking size of /dev/sda9 Oct 13 00:26:34.780702 chronyd[1836]: Timezone right/UTC failed leap second check, ignoring Oct 13 00:26:34.777728 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 13 00:26:34.789823 jq[1865]: true Oct 13 00:26:34.780852 chronyd[1836]: Loaded seccomp filter (level 2) Oct 13 00:26:34.778393 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 13 00:26:34.785200 systemd[1]: Started chronyd.service - NTP client/server. Oct 13 00:26:34.790777 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 13 00:26:34.793340 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Oct 13 00:26:34.801789 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 13 00:26:34.806541 systemd[1]: motdgen.service: Deactivated successfully. Oct 13 00:26:34.806771 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Oct 13 00:26:34.823745 (ntainerd)[1881]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Oct 13 00:26:34.827968 jq[1880]: true Oct 13 00:26:34.828443 extend-filesystems[1845]: Old size kept for /dev/sda9 Oct 13 00:26:34.843114 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 13 00:26:34.843292 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 13 00:26:34.851373 update_engine[1863]: I20251013 00:26:34.850723 1863 main.cc:92] Flatcar Update Engine starting Oct 13 00:26:34.854529 systemd-logind[1859]: New seat seat0. Oct 13 00:26:34.856843 systemd-logind[1859]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Oct 13 00:26:34.857004 systemd[1]: Started systemd-logind.service - User Login Management. Oct 13 00:26:34.867397 tar[1871]: linux-arm64/LICENSE Oct 13 00:26:34.867587 tar[1871]: linux-arm64/helm Oct 13 00:26:34.943988 bash[1922]: Updated "/home/core/.ssh/authorized_keys" Oct 13 00:26:34.946491 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 13 00:26:34.960381 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Oct 13 00:26:35.060984 sshd_keygen[1879]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 13 00:26:35.102261 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 13 00:26:35.111111 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 13 00:26:35.121449 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Oct 13 00:26:35.144738 dbus-daemon[1839]: [system] SELinux support is enabled Oct 13 00:26:35.140261 systemd[1]: issuegen.service: Deactivated successfully. Oct 13 00:26:35.141014 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 13 00:26:35.148127 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Oct 13 00:26:35.154066 update_engine[1863]: I20251013 00:26:35.149169 1863 update_check_scheduler.cc:74] Next update check in 4m41s Oct 13 00:26:35.159257 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 13 00:26:35.159335 dbus-daemon[1839]: [system] Successfully activated service 'org.freedesktop.systemd1' Oct 13 00:26:35.159288 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Oct 13 00:26:35.167756 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 13 00:26:35.174215 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 13 00:26:35.174240 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 13 00:26:35.183246 systemd[1]: Started update-engine.service - Update Engine. Oct 13 00:26:35.191709 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Oct 13 00:26:35.201675 systemd[1]: Started locksmithd.service - Cluster reboot manager. Oct 13 00:26:35.208068 coreos-metadata[1838]: Oct 13 00:26:35.208 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Oct 13 00:26:35.213281 coreos-metadata[1838]: Oct 13 00:26:35.213 INFO Fetch successful Oct 13 00:26:35.213455 coreos-metadata[1838]: Oct 13 00:26:35.213 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Oct 13 00:26:35.214122 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
Oct 13 00:26:35.220620 coreos-metadata[1838]: Oct 13 00:26:35.220 INFO Fetch successful Oct 13 00:26:35.220685 coreos-metadata[1838]: Oct 13 00:26:35.220 INFO Fetching http://168.63.129.16/machine/a4e64a53-c8e2-44ee-9312-8e183722f73b/fd39b1ae%2D120c%2D4c39%2D9649%2De9f719972719.%5Fci%2D4459.1.0%2Da%2D27183f81a1?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Oct 13 00:26:35.221531 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 13 00:26:35.228256 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Oct 13 00:26:35.233281 coreos-metadata[1838]: Oct 13 00:26:35.232 INFO Fetch successful Oct 13 00:26:35.233281 coreos-metadata[1838]: Oct 13 00:26:35.233 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Oct 13 00:26:35.237127 systemd[1]: Reached target getty.target - Login Prompts. Oct 13 00:26:35.243421 coreos-metadata[1838]: Oct 13 00:26:35.243 INFO Fetch successful Oct 13 00:26:35.272318 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Oct 13 00:26:35.282650 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 13 00:26:35.292298 tar[1871]: linux-arm64/README.md Oct 13 00:26:35.307766 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Oct 13 00:26:35.421540 containerd[1881]: time="2025-10-13T00:26:35Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Oct 13 00:26:35.423846 containerd[1881]: time="2025-10-13T00:26:35.423707308Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Oct 13 00:26:35.432178 containerd[1881]: time="2025-10-13T00:26:35.431454332Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.608µs" Oct 13 00:26:35.432178 containerd[1881]: time="2025-10-13T00:26:35.431485164Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Oct 13 00:26:35.432178 containerd[1881]: time="2025-10-13T00:26:35.431503372Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Oct 13 00:26:35.432178 containerd[1881]: time="2025-10-13T00:26:35.431625180Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Oct 13 00:26:35.432178 containerd[1881]: time="2025-10-13T00:26:35.431635436Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Oct 13 00:26:35.432178 containerd[1881]: time="2025-10-13T00:26:35.431651868Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 13 00:26:35.432178 containerd[1881]: time="2025-10-13T00:26:35.431693068Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 13 00:26:35.432178 containerd[1881]: time="2025-10-13T00:26:35.431699668Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 13 
00:26:35.432178 containerd[1881]: time="2025-10-13T00:26:35.431873884Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 13 00:26:35.432178 containerd[1881]: time="2025-10-13T00:26:35.431882916Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 13 00:26:35.432178 containerd[1881]: time="2025-10-13T00:26:35.431891132Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 13 00:26:35.432178 containerd[1881]: time="2025-10-13T00:26:35.431896452Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Oct 13 00:26:35.432443 containerd[1881]: time="2025-10-13T00:26:35.432423204Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Oct 13 00:26:35.432777 containerd[1881]: time="2025-10-13T00:26:35.432755540Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 13 00:26:35.432855 containerd[1881]: time="2025-10-13T00:26:35.432843220Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 13 00:26:35.433218 containerd[1881]: time="2025-10-13T00:26:35.433198740Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Oct 13 00:26:35.433314 containerd[1881]: time="2025-10-13T00:26:35.433302548Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Oct 13 00:26:35.433541 
containerd[1881]: time="2025-10-13T00:26:35.433527140Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Oct 13 00:26:35.433729 containerd[1881]: time="2025-10-13T00:26:35.433714300Z" level=info msg="metadata content store policy set" policy=shared Oct 13 00:26:35.447796 containerd[1881]: time="2025-10-13T00:26:35.447754636Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Oct 13 00:26:35.449246 containerd[1881]: time="2025-10-13T00:26:35.447914988Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Oct 13 00:26:35.449246 containerd[1881]: time="2025-10-13T00:26:35.447932172Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Oct 13 00:26:35.449246 containerd[1881]: time="2025-10-13T00:26:35.447953404Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Oct 13 00:26:35.449246 containerd[1881]: time="2025-10-13T00:26:35.447969172Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Oct 13 00:26:35.449246 containerd[1881]: time="2025-10-13T00:26:35.447981196Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Oct 13 00:26:35.449246 containerd[1881]: time="2025-10-13T00:26:35.447996980Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Oct 13 00:26:35.449246 containerd[1881]: time="2025-10-13T00:26:35.448009924Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Oct 13 00:26:35.449246 containerd[1881]: time="2025-10-13T00:26:35.448018300Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Oct 13 00:26:35.449246 containerd[1881]: 
time="2025-10-13T00:26:35.448024956Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Oct 13 00:26:35.449246 containerd[1881]: time="2025-10-13T00:26:35.448031844Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Oct 13 00:26:35.449246 containerd[1881]: time="2025-10-13T00:26:35.448040788Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Oct 13 00:26:35.449246 containerd[1881]: time="2025-10-13T00:26:35.448166892Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Oct 13 00:26:35.449246 containerd[1881]: time="2025-10-13T00:26:35.448181436Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Oct 13 00:26:35.449246 containerd[1881]: time="2025-10-13T00:26:35.448192540Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Oct 13 00:26:35.449453 containerd[1881]: time="2025-10-13T00:26:35.448200940Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Oct 13 00:26:35.449453 containerd[1881]: time="2025-10-13T00:26:35.448208556Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Oct 13 00:26:35.449453 containerd[1881]: time="2025-10-13T00:26:35.448215820Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Oct 13 00:26:35.449453 containerd[1881]: time="2025-10-13T00:26:35.448222708Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Oct 13 00:26:35.449453 containerd[1881]: time="2025-10-13T00:26:35.448228868Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Oct 13 00:26:35.449453 containerd[1881]: time="2025-10-13T00:26:35.448236404Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Oct 13 00:26:35.449453 containerd[1881]: time="2025-10-13T00:26:35.448242420Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Oct 13 00:26:35.449453 containerd[1881]: time="2025-10-13T00:26:35.448249380Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Oct 13 00:26:35.449453 containerd[1881]: time="2025-10-13T00:26:35.448307628Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Oct 13 00:26:35.449453 containerd[1881]: time="2025-10-13T00:26:35.448318188Z" level=info msg="Start snapshots syncer" Oct 13 00:26:35.449453 containerd[1881]: time="2025-10-13T00:26:35.448333764Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Oct 13 00:26:35.449590 containerd[1881]: time="2025-10-13T00:26:35.448488828Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Oct 13 00:26:35.449590 containerd[1881]: time="2025-10-13T00:26:35.448527028Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Oct 13 00:26:35.449664 containerd[1881]: time="2025-10-13T00:26:35.448598428Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Oct 13 00:26:35.449664 containerd[1881]: time="2025-10-13T00:26:35.448687524Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Oct 13 00:26:35.449664 containerd[1881]: time="2025-10-13T00:26:35.448700492Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Oct 13 00:26:35.449664 containerd[1881]: time="2025-10-13T00:26:35.448708332Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Oct 13 00:26:35.449664 containerd[1881]: time="2025-10-13T00:26:35.448715940Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Oct 13 00:26:35.449664 containerd[1881]: time="2025-10-13T00:26:35.448729228Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Oct 13 00:26:35.449664 containerd[1881]: time="2025-10-13T00:26:35.448736276Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Oct 13 00:26:35.449664 containerd[1881]: time="2025-10-13T00:26:35.448742892Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Oct 13 00:26:35.449664 containerd[1881]: time="2025-10-13T00:26:35.448767516Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Oct 13 00:26:35.449664 containerd[1881]: time="2025-10-13T00:26:35.448776508Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Oct 13 00:26:35.449664 containerd[1881]: time="2025-10-13T00:26:35.448795668Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Oct 13 00:26:35.449664 containerd[1881]: time="2025-10-13T00:26:35.448816612Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 13 00:26:35.449664 containerd[1881]: time="2025-10-13T00:26:35.448826228Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 13 00:26:35.449664 containerd[1881]: time="2025-10-13T00:26:35.448831692Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 13 00:26:35.449824 containerd[1881]: time="2025-10-13T00:26:35.448837780Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 13 00:26:35.449824 containerd[1881]: time="2025-10-13T00:26:35.448842252Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Oct 13 00:26:35.449824 containerd[1881]: time="2025-10-13T00:26:35.448847740Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Oct 13 00:26:35.449824 containerd[1881]: time="2025-10-13T00:26:35.448854604Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Oct 13 00:26:35.449824 containerd[1881]: time="2025-10-13T00:26:35.448867028Z" level=info msg="runtime interface created" Oct 13 00:26:35.449824 containerd[1881]: time="2025-10-13T00:26:35.448870524Z" level=info msg="created NRI interface" Oct 13 00:26:35.449824 containerd[1881]: time="2025-10-13T00:26:35.448875764Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Oct 13 00:26:35.449824 containerd[1881]: time="2025-10-13T00:26:35.448883508Z" level=info msg="Connect containerd service" Oct 13 00:26:35.449824 containerd[1881]: time="2025-10-13T00:26:35.448901404Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 13 00:26:35.451998 
containerd[1881]: time="2025-10-13T00:26:35.451973548Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 13 00:26:35.472629 locksmithd[2005]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 13 00:26:35.616725 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 00:26:35.755338 (kubelet)[2036]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 13 00:26:36.107046 kubelet[2036]: E1013 00:26:36.107003 2036 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 13 00:26:36.109099 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 13 00:26:36.109206 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 13 00:26:36.109709 systemd[1]: kubelet.service: Consumed 540ms CPU time, 254.1M memory peak. Oct 13 00:26:36.135578 containerd[1881]: time="2025-10-13T00:26:36.135183636Z" level=info msg="Start subscribing containerd event" Oct 13 00:26:36.135578 containerd[1881]: time="2025-10-13T00:26:36.135243756Z" level=info msg="Start recovering state" Oct 13 00:26:36.135578 containerd[1881]: time="2025-10-13T00:26:36.135323764Z" level=info msg="Start event monitor" Oct 13 00:26:36.135578 containerd[1881]: time="2025-10-13T00:26:36.135333436Z" level=info msg="Start cni network conf syncer for default" Oct 13 00:26:36.135578 containerd[1881]: time="2025-10-13T00:26:36.135333732Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Oct 13 00:26:36.135578 containerd[1881]: time="2025-10-13T00:26:36.135342532Z" level=info msg="Start streaming server" Oct 13 00:26:36.135578 containerd[1881]: time="2025-10-13T00:26:36.135363732Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Oct 13 00:26:36.135578 containerd[1881]: time="2025-10-13T00:26:36.135368668Z" level=info msg="runtime interface starting up..." Oct 13 00:26:36.135578 containerd[1881]: time="2025-10-13T00:26:36.135372356Z" level=info msg="starting plugins..." Oct 13 00:26:36.135578 containerd[1881]: time="2025-10-13T00:26:36.135378052Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 13 00:26:36.135578 containerd[1881]: time="2025-10-13T00:26:36.135383996Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Oct 13 00:26:36.135578 containerd[1881]: time="2025-10-13T00:26:36.135486052Z" level=info msg="containerd successfully booted in 0.714257s" Oct 13 00:26:36.137061 systemd[1]: Started containerd.service - containerd container runtime. Oct 13 00:26:36.142372 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 13 00:26:36.149272 systemd[1]: Startup finished in 1.649s (kernel) + 13.016s (initrd) + 15.981s (userspace) = 30.647s. Oct 13 00:26:36.698377 login[2007]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:26:36.700382 login[2008]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:26:36.704248 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 13 00:26:36.705199 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 13 00:26:36.712594 systemd-logind[1859]: New session 1 of user core. Oct 13 00:26:36.715092 systemd-logind[1859]: New session 2 of user core. Oct 13 00:26:36.734502 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
Oct 13 00:26:36.736617 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 13 00:26:36.761313 (systemd)[2056]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 13 00:26:36.763156 systemd-logind[1859]: New session c1 of user core. Oct 13 00:26:37.013412 systemd[2056]: Queued start job for default target default.target. Oct 13 00:26:37.025649 systemd[2056]: Created slice app.slice - User Application Slice. Oct 13 00:26:37.025993 systemd[2056]: Reached target paths.target - Paths. Oct 13 00:26:37.026034 systemd[2056]: Reached target timers.target - Timers. Oct 13 00:26:37.027116 systemd[2056]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 13 00:26:37.035061 systemd[2056]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 13 00:26:37.035190 systemd[2056]: Reached target sockets.target - Sockets. Oct 13 00:26:37.035295 systemd[2056]: Reached target basic.target - Basic System. Oct 13 00:26:37.035392 systemd[2056]: Reached target default.target - Main User Target. Oct 13 00:26:37.035408 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 13 00:26:37.035856 systemd[2056]: Startup finished in 267ms. Oct 13 00:26:37.036257 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 13 00:26:37.036707 systemd[1]: Started session-2.scope - Session 2 of User core. 
Oct 13 00:26:37.100360 waagent[2000]: 2025-10-13T00:26:37.096211Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Oct 13 00:26:37.100661 waagent[2000]: 2025-10-13T00:26:37.100508Z INFO Daemon Daemon OS: flatcar 4459.1.0 Oct 13 00:26:37.103861 waagent[2000]: 2025-10-13T00:26:37.103827Z INFO Daemon Daemon Python: 3.11.13 Oct 13 00:26:37.108952 waagent[2000]: 2025-10-13T00:26:37.107058Z INFO Daemon Daemon Run daemon Oct 13 00:26:37.110234 waagent[2000]: 2025-10-13T00:26:37.110118Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4459.1.0' Oct 13 00:26:37.116820 waagent[2000]: 2025-10-13T00:26:37.116782Z INFO Daemon Daemon Using waagent for provisioning Oct 13 00:26:37.120459 waagent[2000]: 2025-10-13T00:26:37.120425Z INFO Daemon Daemon Activate resource disk Oct 13 00:26:37.124308 waagent[2000]: 2025-10-13T00:26:37.124280Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Oct 13 00:26:37.134245 waagent[2000]: 2025-10-13T00:26:37.134163Z INFO Daemon Daemon Found device: None Oct 13 00:26:37.137872 waagent[2000]: 2025-10-13T00:26:37.137828Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Oct 13 00:26:37.144597 waagent[2000]: 2025-10-13T00:26:37.144564Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Oct 13 00:26:37.153486 waagent[2000]: 2025-10-13T00:26:37.153442Z INFO Daemon Daemon Clean protocol and wireserver endpoint Oct 13 00:26:37.158420 waagent[2000]: 2025-10-13T00:26:37.158388Z INFO Daemon Daemon Running default provisioning handler Oct 13 00:26:37.167004 waagent[2000]: 2025-10-13T00:26:37.166936Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. 
Oct 13 00:26:37.180281 waagent[2000]: 2025-10-13T00:26:37.180224Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Oct 13 00:26:37.187960 waagent[2000]: 2025-10-13T00:26:37.187788Z INFO Daemon Daemon cloud-init is enabled: False Oct 13 00:26:37.191903 waagent[2000]: 2025-10-13T00:26:37.191864Z INFO Daemon Daemon Copying ovf-env.xml Oct 13 00:26:37.305268 waagent[2000]: 2025-10-13T00:26:37.305187Z INFO Daemon Daemon Successfully mounted dvd Oct 13 00:26:37.331888 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Oct 13 00:26:37.334076 waagent[2000]: 2025-10-13T00:26:37.334018Z INFO Daemon Daemon Detect protocol endpoint Oct 13 00:26:37.337790 waagent[2000]: 2025-10-13T00:26:37.337736Z INFO Daemon Daemon Clean protocol and wireserver endpoint Oct 13 00:26:37.342178 waagent[2000]: 2025-10-13T00:26:37.342135Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Oct 13 00:26:37.347367 waagent[2000]: 2025-10-13T00:26:37.347337Z INFO Daemon Daemon Test for route to 168.63.129.16 Oct 13 00:26:37.351684 waagent[2000]: 2025-10-13T00:26:37.351638Z INFO Daemon Daemon Route to 168.63.129.16 exists Oct 13 00:26:37.355798 waagent[2000]: 2025-10-13T00:26:37.355766Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Oct 13 00:26:37.396967 waagent[2000]: 2025-10-13T00:26:37.396347Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Oct 13 00:26:37.401035 waagent[2000]: 2025-10-13T00:26:37.401011Z INFO Daemon Daemon Wire protocol version:2012-11-30 Oct 13 00:26:37.404961 waagent[2000]: 2025-10-13T00:26:37.404929Z INFO Daemon Daemon Server preferred version:2015-04-05 Oct 13 00:26:37.479983 waagent[2000]: 2025-10-13T00:26:37.479485Z INFO Daemon Daemon Initializing goal state during protocol detection Oct 13 00:26:37.484406 waagent[2000]: 2025-10-13T00:26:37.484365Z INFO Daemon Daemon Forcing an update of the goal state. 
Oct 13 00:26:37.492540 waagent[2000]: 2025-10-13T00:26:37.492501Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Oct 13 00:26:37.510828 waagent[2000]: 2025-10-13T00:26:37.510795Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.177 Oct 13 00:26:37.515078 waagent[2000]: 2025-10-13T00:26:37.515045Z INFO Daemon Oct 13 00:26:37.517083 waagent[2000]: 2025-10-13T00:26:37.517054Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 59a5fa55-1616-408b-8d92-0db79eeb6e1d eTag: 1811156294897899123 source: Fabric] Oct 13 00:26:37.524777 waagent[2000]: 2025-10-13T00:26:37.524742Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Oct 13 00:26:37.529602 waagent[2000]: 2025-10-13T00:26:37.529569Z INFO Daemon Oct 13 00:26:37.531674 waagent[2000]: 2025-10-13T00:26:37.531647Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Oct 13 00:26:37.540465 waagent[2000]: 2025-10-13T00:26:37.540437Z INFO Daemon Daemon Downloading artifacts profile blob Oct 13 00:26:37.602877 waagent[2000]: 2025-10-13T00:26:37.602763Z INFO Daemon Downloaded certificate {'thumbprint': '34CD39A293CA51B2729F99497028D732FCC60152', 'hasPrivateKey': True} Oct 13 00:26:37.610260 waagent[2000]: 2025-10-13T00:26:37.610212Z INFO Daemon Fetch goal state completed Oct 13 00:26:37.619910 waagent[2000]: 2025-10-13T00:26:37.619846Z INFO Daemon Daemon Starting provisioning Oct 13 00:26:37.623745 waagent[2000]: 2025-10-13T00:26:37.623708Z INFO Daemon Daemon Handle ovf-env.xml. 
Oct 13 00:26:37.627097 waagent[2000]: 2025-10-13T00:26:37.627074Z INFO Daemon Daemon Set hostname [ci-4459.1.0-a-27183f81a1] Oct 13 00:26:37.666345 waagent[2000]: 2025-10-13T00:26:37.666295Z INFO Daemon Daemon Publish hostname [ci-4459.1.0-a-27183f81a1] Oct 13 00:26:37.671184 waagent[2000]: 2025-10-13T00:26:37.671147Z INFO Daemon Daemon Examine /proc/net/route for primary interface Oct 13 00:26:37.675879 waagent[2000]: 2025-10-13T00:26:37.675833Z INFO Daemon Daemon Primary interface is [eth0] Oct 13 00:26:37.685810 systemd-networkd[1476]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 13 00:26:37.685816 systemd-networkd[1476]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 13 00:26:37.685847 systemd-networkd[1476]: eth0: DHCP lease lost Oct 13 00:26:37.686600 waagent[2000]: 2025-10-13T00:26:37.686441Z INFO Daemon Daemon Create user account if not exists Oct 13 00:26:37.690571 waagent[2000]: 2025-10-13T00:26:37.690534Z INFO Daemon Daemon User core already exists, skip useradd Oct 13 00:26:37.695101 waagent[2000]: 2025-10-13T00:26:37.695060Z INFO Daemon Daemon Configure sudoer Oct 13 00:26:37.703062 waagent[2000]: 2025-10-13T00:26:37.703007Z INFO Daemon Daemon Configure sshd Oct 13 00:26:37.713719 waagent[2000]: 2025-10-13T00:26:37.713628Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Oct 13 00:26:37.722658 waagent[2000]: 2025-10-13T00:26:37.722582Z INFO Daemon Daemon Deploy ssh public key. 
Oct 13 00:26:37.733853 systemd-networkd[1476]: eth0: DHCPv4 address 10.200.20.34/24, gateway 10.200.20.1 acquired from 168.63.129.16 Oct 13 00:26:38.871961 waagent[2000]: 2025-10-13T00:26:38.870514Z INFO Daemon Daemon Provisioning complete Oct 13 00:26:38.883992 waagent[2000]: 2025-10-13T00:26:38.883958Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Oct 13 00:26:38.888836 waagent[2000]: 2025-10-13T00:26:38.888807Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Oct 13 00:26:38.895952 waagent[2000]: 2025-10-13T00:26:38.895920Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Oct 13 00:26:38.992466 waagent[2105]: 2025-10-13T00:26:38.992400Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Oct 13 00:26:38.993867 waagent[2105]: 2025-10-13T00:26:38.992843Z INFO ExtHandler ExtHandler OS: flatcar 4459.1.0 Oct 13 00:26:38.993867 waagent[2105]: 2025-10-13T00:26:38.992898Z INFO ExtHandler ExtHandler Python: 3.11.13 Oct 13 00:26:38.993867 waagent[2105]: 2025-10-13T00:26:38.992935Z INFO ExtHandler ExtHandler CPU Arch: aarch64 Oct 13 00:26:39.056292 waagent[2105]: 2025-10-13T00:26:39.056225Z INFO ExtHandler ExtHandler Distro: flatcar-4459.1.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Oct 13 00:26:39.056616 waagent[2105]: 2025-10-13T00:26:39.056584Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Oct 13 00:26:39.056747 waagent[2105]: 2025-10-13T00:26:39.056721Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Oct 13 00:26:39.062759 waagent[2105]: 2025-10-13T00:26:39.062707Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Oct 13 00:26:39.067705 waagent[2105]: 2025-10-13T00:26:39.067672Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177 Oct 13 
00:26:39.068171 waagent[2105]: 2025-10-13T00:26:39.068132Z INFO ExtHandler Oct 13 00:26:39.068313 waagent[2105]: 2025-10-13T00:26:39.068289Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 7baa5985-3a2b-4694-a233-174113bd4a3c eTag: 1811156294897899123 source: Fabric] Oct 13 00:26:39.068605 waagent[2105]: 2025-10-13T00:26:39.068575Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Oct 13 00:26:39.069129 waagent[2105]: 2025-10-13T00:26:39.069097Z INFO ExtHandler Oct 13 00:26:39.069233 waagent[2105]: 2025-10-13T00:26:39.069214Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Oct 13 00:26:39.072658 waagent[2105]: 2025-10-13T00:26:39.072627Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Oct 13 00:26:39.122557 waagent[2105]: 2025-10-13T00:26:39.122463Z INFO ExtHandler Downloaded certificate {'thumbprint': '34CD39A293CA51B2729F99497028D732FCC60152', 'hasPrivateKey': True} Oct 13 00:26:39.123086 waagent[2105]: 2025-10-13T00:26:39.123053Z INFO ExtHandler Fetch goal state completed Oct 13 00:26:39.135308 waagent[2105]: 2025-10-13T00:26:39.135276Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.4.2 1 Jul 2025 (Library: OpenSSL 3.4.2 1 Jul 2025) Oct 13 00:26:39.138978 waagent[2105]: 2025-10-13T00:26:39.138545Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2105 Oct 13 00:26:39.138978 waagent[2105]: 2025-10-13T00:26:39.138668Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Oct 13 00:26:39.138978 waagent[2105]: 2025-10-13T00:26:39.138892Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Oct 13 00:26:39.140016 waagent[2105]: 2025-10-13T00:26:39.139982Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4459.1.0', '', 'Flatcar Container Linux by Kinvolk'] Oct 13 00:26:39.140322 waagent[2105]: 
2025-10-13T00:26:39.140292Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4459.1.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Oct 13 00:26:39.140431 waagent[2105]: 2025-10-13T00:26:39.140409Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Oct 13 00:26:39.140834 waagent[2105]: 2025-10-13T00:26:39.140804Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Oct 13 00:26:39.197825 waagent[2105]: 2025-10-13T00:26:39.197787Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Oct 13 00:26:39.198043 waagent[2105]: 2025-10-13T00:26:39.198014Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Oct 13 00:26:39.202860 waagent[2105]: 2025-10-13T00:26:39.202824Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Oct 13 00:26:39.207168 systemd[1]: Reload requested from client PID 2120 ('systemctl') (unit waagent.service)... Oct 13 00:26:39.207390 systemd[1]: Reloading... Oct 13 00:26:39.278982 zram_generator::config[2174]: No configuration found. Oct 13 00:26:39.419338 systemd[1]: Reloading finished in 211 ms. Oct 13 00:26:39.443967 waagent[2105]: 2025-10-13T00:26:39.443381Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Oct 13 00:26:39.443967 waagent[2105]: 2025-10-13T00:26:39.443521Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Oct 13 00:26:39.782984 waagent[2105]: 2025-10-13T00:26:39.782589Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Oct 13 00:26:39.782984 waagent[2105]: 2025-10-13T00:26:39.782885Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. 
configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Oct 13 00:26:39.783820 waagent[2105]: 2025-10-13T00:26:39.783595Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Oct 13 00:26:39.783820 waagent[2105]: 2025-10-13T00:26:39.783666Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Oct 13 00:26:39.783820 waagent[2105]: 2025-10-13T00:26:39.783811Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Oct 13 00:26:39.784038 waagent[2105]: 2025-10-13T00:26:39.783982Z INFO ExtHandler ExtHandler Starting env monitor service. Oct 13 00:26:39.784084 waagent[2105]: 2025-10-13T00:26:39.784038Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Oct 13 00:26:39.784084 waagent[2105]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Oct 13 00:26:39.784084 waagent[2105]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Oct 13 00:26:39.784084 waagent[2105]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Oct 13 00:26:39.784084 waagent[2105]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Oct 13 00:26:39.784084 waagent[2105]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Oct 13 00:26:39.784084 waagent[2105]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Oct 13 00:26:39.784771 waagent[2105]: 2025-10-13T00:26:39.784468Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. 
Oct 13 00:26:39.784771 waagent[2105]: 2025-10-13T00:26:39.784546Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Oct 13 00:26:39.784771 waagent[2105]: 2025-10-13T00:26:39.784592Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Oct 13 00:26:39.784771 waagent[2105]: 2025-10-13T00:26:39.784677Z INFO EnvHandler ExtHandler Configure routes Oct 13 00:26:39.784771 waagent[2105]: 2025-10-13T00:26:39.784712Z INFO EnvHandler ExtHandler Gateway:None Oct 13 00:26:39.784771 waagent[2105]: 2025-10-13T00:26:39.784735Z INFO EnvHandler ExtHandler Routes:None Oct 13 00:26:39.785221 waagent[2105]: 2025-10-13T00:26:39.785183Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Oct 13 00:26:39.785357 waagent[2105]: 2025-10-13T00:26:39.785321Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Oct 13 00:26:39.785779 waagent[2105]: 2025-10-13T00:26:39.785747Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Oct 13 00:26:39.785874 waagent[2105]: 2025-10-13T00:26:39.785839Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Oct 13 00:26:39.785973 waagent[2105]: 2025-10-13T00:26:39.785921Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Oct 13 00:26:39.791110 waagent[2105]: 2025-10-13T00:26:39.791078Z INFO ExtHandler ExtHandler Oct 13 00:26:39.791253 waagent[2105]: 2025-10-13T00:26:39.791227Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 0d24dcda-c8f1-4da8-bc6f-c6e5739f9959 correlation 513bf552-5c03-4847-8537-f8f7695076cb created: 2025-10-13T00:25:25.792847Z] Oct 13 00:26:39.791628 waagent[2105]: 2025-10-13T00:26:39.791599Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Oct 13 00:26:39.792109 waagent[2105]: 2025-10-13T00:26:39.792081Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Oct 13 00:26:40.335485 waagent[2105]: 2025-10-13T00:26:40.335429Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Oct 13 00:26:40.335485 waagent[2105]: Try `iptables -h' or 'iptables --help' for more information.) Oct 13 00:26:40.336205 waagent[2105]: 2025-10-13T00:26:40.336176Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 59942035-0117-46E3-9934-25316B18F90F;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Oct 13 00:26:40.396541 waagent[2105]: 2025-10-13T00:26:40.396481Z INFO MonitorHandler ExtHandler Network interfaces: Oct 13 00:26:40.396541 waagent[2105]: Executing ['ip', '-a', '-o', 'link']: Oct 13 00:26:40.396541 waagent[2105]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Oct 13 00:26:40.396541 waagent[2105]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7d:cd:d6 brd ff:ff:ff:ff:ff:ff Oct 13 00:26:40.396541 waagent[2105]: 3: enP48423s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7d:cd:d6 brd ff:ff:ff:ff:ff:ff\ altname enP48423p0s2 Oct 13 00:26:40.396541 waagent[2105]: Executing ['ip', '-4', '-a', '-o', 'address']: Oct 13 00:26:40.396541 waagent[2105]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Oct 13 00:26:40.396541 waagent[2105]: 2: eth0 inet 10.200.20.34/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Oct 13 00:26:40.396541 waagent[2105]: Executing ['ip', '-6', '-a', 
'-o', 'address']: Oct 13 00:26:40.396541 waagent[2105]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Oct 13 00:26:40.396541 waagent[2105]: 2: eth0 inet6 fe80::222:48ff:fe7d:cdd6/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Oct 13 00:26:40.453819 waagent[2105]: 2025-10-13T00:26:40.453208Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Oct 13 00:26:40.453819 waagent[2105]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Oct 13 00:26:40.453819 waagent[2105]: pkts bytes target prot opt in out source destination Oct 13 00:26:40.453819 waagent[2105]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Oct 13 00:26:40.453819 waagent[2105]: pkts bytes target prot opt in out source destination Oct 13 00:26:40.453819 waagent[2105]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Oct 13 00:26:40.453819 waagent[2105]: pkts bytes target prot opt in out source destination Oct 13 00:26:40.453819 waagent[2105]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Oct 13 00:26:40.453819 waagent[2105]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Oct 13 00:26:40.453819 waagent[2105]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Oct 13 00:26:40.455446 waagent[2105]: 2025-10-13T00:26:40.455412Z INFO EnvHandler ExtHandler Current Firewall rules: Oct 13 00:26:40.455446 waagent[2105]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Oct 13 00:26:40.455446 waagent[2105]: pkts bytes target prot opt in out source destination Oct 13 00:26:40.455446 waagent[2105]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Oct 13 00:26:40.455446 waagent[2105]: pkts bytes target prot opt in out source destination Oct 13 00:26:40.455446 waagent[2105]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Oct 13 00:26:40.455446 waagent[2105]: pkts bytes target prot opt in out source destination Oct 13 00:26:40.455446 waagent[2105]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp 
dpt:53 Oct 13 00:26:40.455446 waagent[2105]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Oct 13 00:26:40.455446 waagent[2105]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Oct 13 00:26:40.455845 waagent[2105]: 2025-10-13T00:26:40.455823Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Oct 13 00:26:41.950553 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 13 00:26:41.951457 systemd[1]: Started sshd@0-10.200.20.34:22-10.200.16.10:38926.service - OpenSSH per-connection server daemon (10.200.16.10:38926). Oct 13 00:26:42.556931 sshd[2247]: Accepted publickey for core from 10.200.16.10 port 38926 ssh2: RSA SHA256:aubEDS8yZfNH2XbdzFIlpBCeXwKvWyi9x03sf6YxNU8 Oct 13 00:26:42.557916 sshd-session[2247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:26:42.561287 systemd-logind[1859]: New session 3 of user core. Oct 13 00:26:42.568190 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 13 00:26:42.983291 systemd[1]: Started sshd@1-10.200.20.34:22-10.200.16.10:38936.service - OpenSSH per-connection server daemon (10.200.16.10:38936). Oct 13 00:26:43.415399 sshd[2253]: Accepted publickey for core from 10.200.16.10 port 38936 ssh2: RSA SHA256:aubEDS8yZfNH2XbdzFIlpBCeXwKvWyi9x03sf6YxNU8 Oct 13 00:26:43.416428 sshd-session[2253]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:26:43.419838 systemd-logind[1859]: New session 4 of user core. Oct 13 00:26:43.427213 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 13 00:26:43.738268 sshd[2256]: Connection closed by 10.200.16.10 port 38936 Oct 13 00:26:43.738763 sshd-session[2253]: pam_unix(sshd:session): session closed for user core Oct 13 00:26:43.742317 systemd[1]: sshd@1-10.200.20.34:22-10.200.16.10:38936.service: Deactivated successfully. Oct 13 00:26:43.743650 systemd[1]: session-4.scope: Deactivated successfully. 
Oct 13 00:26:43.744305 systemd-logind[1859]: Session 4 logged out. Waiting for processes to exit. Oct 13 00:26:43.745270 systemd-logind[1859]: Removed session 4. Oct 13 00:26:43.812607 systemd[1]: Started sshd@2-10.200.20.34:22-10.200.16.10:38952.service - OpenSSH per-connection server daemon (10.200.16.10:38952). Oct 13 00:26:44.232326 sshd[2262]: Accepted publickey for core from 10.200.16.10 port 38952 ssh2: RSA SHA256:aubEDS8yZfNH2XbdzFIlpBCeXwKvWyi9x03sf6YxNU8 Oct 13 00:26:44.233406 sshd-session[2262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:26:44.236998 systemd-logind[1859]: New session 5 of user core. Oct 13 00:26:44.248276 systemd[1]: Started session-5.scope - Session 5 of User core. Oct 13 00:26:44.548640 sshd[2265]: Connection closed by 10.200.16.10 port 38952 Oct 13 00:26:44.549203 sshd-session[2262]: pam_unix(sshd:session): session closed for user core Oct 13 00:26:44.552611 systemd[1]: sshd@2-10.200.20.34:22-10.200.16.10:38952.service: Deactivated successfully. Oct 13 00:26:44.554276 systemd[1]: session-5.scope: Deactivated successfully. Oct 13 00:26:44.555702 systemd-logind[1859]: Session 5 logged out. Waiting for processes to exit. Oct 13 00:26:44.557701 systemd-logind[1859]: Removed session 5. Oct 13 00:26:44.627902 systemd[1]: Started sshd@3-10.200.20.34:22-10.200.16.10:38958.service - OpenSSH per-connection server daemon (10.200.16.10:38958). Oct 13 00:26:45.061393 sshd[2271]: Accepted publickey for core from 10.200.16.10 port 38958 ssh2: RSA SHA256:aubEDS8yZfNH2XbdzFIlpBCeXwKvWyi9x03sf6YxNU8 Oct 13 00:26:45.062425 sshd-session[2271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:26:45.065989 systemd-logind[1859]: New session 6 of user core. Oct 13 00:26:45.073161 systemd[1]: Started session-6.scope - Session 6 of User core. 
Oct 13 00:26:45.388803 sshd[2274]: Connection closed by 10.200.16.10 port 38958 Oct 13 00:26:45.389318 sshd-session[2271]: pam_unix(sshd:session): session closed for user core Oct 13 00:26:45.393270 systemd[1]: sshd@3-10.200.20.34:22-10.200.16.10:38958.service: Deactivated successfully. Oct 13 00:26:45.394586 systemd[1]: session-6.scope: Deactivated successfully. Oct 13 00:26:45.396404 systemd-logind[1859]: Session 6 logged out. Waiting for processes to exit. Oct 13 00:26:45.397387 systemd-logind[1859]: Removed session 6. Oct 13 00:26:45.466390 systemd[1]: Started sshd@4-10.200.20.34:22-10.200.16.10:38964.service - OpenSSH per-connection server daemon (10.200.16.10:38964). Oct 13 00:26:45.894587 sshd[2280]: Accepted publickey for core from 10.200.16.10 port 38964 ssh2: RSA SHA256:aubEDS8yZfNH2XbdzFIlpBCeXwKvWyi9x03sf6YxNU8 Oct 13 00:26:45.895281 sshd-session[2280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:26:45.898919 systemd-logind[1859]: New session 7 of user core. Oct 13 00:26:45.906314 systemd[1]: Started session-7.scope - Session 7 of User core. Oct 13 00:26:46.271165 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 13 00:26:46.273168 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 00:26:46.304442 sudo[2284]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 13 00:26:46.304665 sudo[2284]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 13 00:26:46.329371 sudo[2284]: pam_unix(sudo:session): session closed for user root Oct 13 00:26:46.414348 sshd[2283]: Connection closed by 10.200.16.10 port 38964 Oct 13 00:26:46.415312 sshd-session[2280]: pam_unix(sshd:session): session closed for user core Oct 13 00:26:46.418802 systemd-logind[1859]: Session 7 logged out. Waiting for processes to exit. 
Oct 13 00:26:46.419447 systemd[1]: sshd@4-10.200.20.34:22-10.200.16.10:38964.service: Deactivated successfully. Oct 13 00:26:46.420806 systemd[1]: session-7.scope: Deactivated successfully. Oct 13 00:26:46.423018 systemd-logind[1859]: Removed session 7. Oct 13 00:26:46.491169 systemd[1]: Started sshd@5-10.200.20.34:22-10.200.16.10:38972.service - OpenSSH per-connection server daemon (10.200.16.10:38972). Oct 13 00:26:46.512079 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 00:26:46.515178 (kubelet)[2300]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 13 00:26:46.545820 kubelet[2300]: E1013 00:26:46.545658 2300 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 13 00:26:46.548555 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 13 00:26:46.548763 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 13 00:26:46.549127 systemd[1]: kubelet.service: Consumed 110ms CPU time, 105.9M memory peak. Oct 13 00:26:46.917176 sshd[2295]: Accepted publickey for core from 10.200.16.10 port 38972 ssh2: RSA SHA256:aubEDS8yZfNH2XbdzFIlpBCeXwKvWyi9x03sf6YxNU8 Oct 13 00:26:46.918282 sshd-session[2295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:26:46.921986 systemd-logind[1859]: New session 8 of user core. Oct 13 00:26:46.931073 systemd[1]: Started session-8.scope - Session 8 of User core. 
Oct 13 00:26:47.156042 sudo[2310]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Oct 13 00:26:47.156242 sudo[2310]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 13 00:26:47.163049 sudo[2310]: pam_unix(sudo:session): session closed for user root
Oct 13 00:26:47.166429 sudo[2309]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Oct 13 00:26:47.166624 sudo[2309]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 13 00:26:47.173062 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Oct 13 00:26:47.200726 augenrules[2332]: No rules
Oct 13 00:26:47.201869 systemd[1]: audit-rules.service: Deactivated successfully.
Oct 13 00:26:47.202173 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Oct 13 00:26:47.203296 sudo[2309]: pam_unix(sudo:session): session closed for user root
Oct 13 00:26:47.273980 sshd[2308]: Connection closed by 10.200.16.10 port 38972
Oct 13 00:26:47.274480 sshd-session[2295]: pam_unix(sshd:session): session closed for user core
Oct 13 00:26:47.277728 systemd-logind[1859]: Session 8 logged out. Waiting for processes to exit.
Oct 13 00:26:47.278007 systemd[1]: sshd@5-10.200.20.34:22-10.200.16.10:38972.service: Deactivated successfully.
Oct 13 00:26:47.280148 systemd[1]: session-8.scope: Deactivated successfully.
Oct 13 00:26:47.281602 systemd-logind[1859]: Removed session 8.
Oct 13 00:26:47.354139 systemd[1]: Started sshd@6-10.200.20.34:22-10.200.16.10:38974.service - OpenSSH per-connection server daemon (10.200.16.10:38974).
Oct 13 00:26:47.790627 sshd[2341]: Accepted publickey for core from 10.200.16.10 port 38974 ssh2: RSA SHA256:aubEDS8yZfNH2XbdzFIlpBCeXwKvWyi9x03sf6YxNU8
Oct 13 00:26:47.791679 sshd-session[2341]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 00:26:47.795271 systemd-logind[1859]: New session 9 of user core.
Oct 13 00:26:47.807185 systemd[1]: Started session-9.scope - Session 9 of User core.
Oct 13 00:26:48.033169 sudo[2345]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Oct 13 00:26:48.033385 sudo[2345]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 13 00:26:49.519601 systemd[1]: Starting docker.service - Docker Application Container Engine...
Oct 13 00:26:49.531199 (dockerd)[2363]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Oct 13 00:26:50.268403 dockerd[2363]: time="2025-10-13T00:26:50.267109148Z" level=info msg="Starting up"
Oct 13 00:26:50.269816 dockerd[2363]: time="2025-10-13T00:26:50.269522516Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Oct 13 00:26:50.276861 dockerd[2363]: time="2025-10-13T00:26:50.276838668Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Oct 13 00:26:50.304690 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2974323677-merged.mount: Deactivated successfully.
Oct 13 00:26:50.395837 dockerd[2363]: time="2025-10-13T00:26:50.395803092Z" level=info msg="Loading containers: start."
Oct 13 00:26:50.457979 kernel: Initializing XFRM netlink socket
Oct 13 00:26:50.825548 systemd-networkd[1476]: docker0: Link UP
Oct 13 00:26:50.839589 dockerd[2363]: time="2025-10-13T00:26:50.839546684Z" level=info msg="Loading containers: done."
Oct 13 00:26:50.858449 dockerd[2363]: time="2025-10-13T00:26:50.858191220Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Oct 13 00:26:50.858449 dockerd[2363]: time="2025-10-13T00:26:50.858256252Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Oct 13 00:26:50.858449 dockerd[2363]: time="2025-10-13T00:26:50.858324132Z" level=info msg="Initializing buildkit"
Oct 13 00:26:50.904747 dockerd[2363]: time="2025-10-13T00:26:50.904686452Z" level=info msg="Completed buildkit initialization"
Oct 13 00:26:50.910321 dockerd[2363]: time="2025-10-13T00:26:50.910267620Z" level=info msg="Daemon has completed initialization"
Oct 13 00:26:50.910451 dockerd[2363]: time="2025-10-13T00:26:50.910347148Z" level=info msg="API listen on /run/docker.sock"
Oct 13 00:26:50.910662 systemd[1]: Started docker.service - Docker Application Container Engine.
Oct 13 00:26:51.651679 containerd[1881]: time="2025-10-13T00:26:51.651427796Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\""
Oct 13 00:26:52.345797 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount517436087.mount: Deactivated successfully.
Oct 13 00:26:54.284091 containerd[1881]: time="2025-10-13T00:26:54.284037236Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 00:26:54.287570 containerd[1881]: time="2025-10-13T00:26:54.287526924Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=26363685"
Oct 13 00:26:54.290716 containerd[1881]: time="2025-10-13T00:26:54.290470028Z" level=info msg="ImageCreate event name:\"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 00:26:54.294589 containerd[1881]: time="2025-10-13T00:26:54.294567308Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 00:26:54.295235 containerd[1881]: time="2025-10-13T00:26:54.295205780Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"26360284\" in 2.643743192s"
Oct 13 00:26:54.295235 containerd[1881]: time="2025-10-13T00:26:54.295235988Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\""
Oct 13 00:26:54.295774 containerd[1881]: time="2025-10-13T00:26:54.295750796Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\""
Oct 13 00:26:56.081059 containerd[1881]: time="2025-10-13T00:26:56.081015940Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 00:26:56.083703 containerd[1881]: time="2025-10-13T00:26:56.083681092Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=22531200"
Oct 13 00:26:56.086959 containerd[1881]: time="2025-10-13T00:26:56.086921684Z" level=info msg="ImageCreate event name:\"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 00:26:56.091789 containerd[1881]: time="2025-10-13T00:26:56.091756676Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 00:26:56.092975 containerd[1881]: time="2025-10-13T00:26:56.092854620Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"24099975\" in 1.79708096s"
Oct 13 00:26:56.092975 containerd[1881]: time="2025-10-13T00:26:56.092881124Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\""
Oct 13 00:26:56.093436 containerd[1881]: time="2025-10-13T00:26:56.093368492Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\""
Oct 13 00:26:56.771241 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Oct 13 00:26:56.772410 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 13 00:26:56.870513 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 13 00:26:56.879144 (kubelet)[2642]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 13 00:26:56.903974 kubelet[2642]: E1013 00:26:56.903911 2642 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 13 00:26:56.905921 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 00:26:56.906134 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 13 00:26:56.907021 systemd[1]: kubelet.service: Consumed 101ms CPU time, 107M memory peak.
Oct 13 00:26:57.584167 containerd[1881]: time="2025-10-13T00:26:57.584117612Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 00:26:57.587557 containerd[1881]: time="2025-10-13T00:26:57.587529948Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=17484324"
Oct 13 00:26:57.590545 containerd[1881]: time="2025-10-13T00:26:57.590508140Z" level=info msg="ImageCreate event name:\"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 00:26:57.594813 containerd[1881]: time="2025-10-13T00:26:57.594788660Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 00:26:57.595846 containerd[1881]: time="2025-10-13T00:26:57.595818908Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"19053117\" in 1.502274s"
Oct 13 00:26:57.595882 containerd[1881]: time="2025-10-13T00:26:57.595847964Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\""
Oct 13 00:26:57.596242 containerd[1881]: time="2025-10-13T00:26:57.596224228Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\""
Oct 13 00:26:58.565089 chronyd[1836]: Selected source PHC0
Oct 13 00:26:58.913011 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount931474409.mount: Deactivated successfully.
Oct 13 00:26:59.183910 containerd[1881]: time="2025-10-13T00:26:59.183351076Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 00:26:59.186793 containerd[1881]: time="2025-10-13T00:26:59.186767565Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=27417817"
Oct 13 00:26:59.190828 containerd[1881]: time="2025-10-13T00:26:59.190788812Z" level=info msg="ImageCreate event name:\"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 00:26:59.195039 containerd[1881]: time="2025-10-13T00:26:59.194996723Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 00:26:59.195377 containerd[1881]: time="2025-10-13T00:26:59.195227317Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"27416836\" in 1.598978795s"
Oct 13 00:26:59.195377 containerd[1881]: time="2025-10-13T00:26:59.195255702Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\""
Oct 13 00:26:59.195860 containerd[1881]: time="2025-10-13T00:26:59.195815265Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Oct 13 00:26:59.835114 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2638566603.mount: Deactivated successfully.
Oct 13 00:27:01.192809 containerd[1881]: time="2025-10-13T00:27:01.192745355Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 00:27:01.195741 containerd[1881]: time="2025-10-13T00:27:01.195708955Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622"
Oct 13 00:27:01.198705 containerd[1881]: time="2025-10-13T00:27:01.198680179Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 00:27:01.203475 containerd[1881]: time="2025-10-13T00:27:01.203433227Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 00:27:01.204087 containerd[1881]: time="2025-10-13T00:27:01.203769515Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 2.007835751s"
Oct 13 00:27:01.204087 containerd[1881]: time="2025-10-13T00:27:01.203796715Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Oct 13 00:27:01.204397 containerd[1881]: time="2025-10-13T00:27:01.204262683Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Oct 13 00:27:01.760397 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4273692492.mount: Deactivated successfully.
Oct 13 00:27:01.780977 containerd[1881]: time="2025-10-13T00:27:01.780891427Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 13 00:27:01.783771 containerd[1881]: time="2025-10-13T00:27:01.783631819Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
Oct 13 00:27:01.786705 containerd[1881]: time="2025-10-13T00:27:01.786677795Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 13 00:27:01.790980 containerd[1881]: time="2025-10-13T00:27:01.790490931Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 13 00:27:01.790980 containerd[1881]: time="2025-10-13T00:27:01.790820691Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 586.532984ms"
Oct 13 00:27:01.790980 containerd[1881]: time="2025-10-13T00:27:01.790847235Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Oct 13 00:27:01.791509 containerd[1881]: time="2025-10-13T00:27:01.791491387Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Oct 13 00:27:02.561270 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount527606994.mount: Deactivated successfully.
Oct 13 00:27:05.611689 containerd[1881]: time="2025-10-13T00:27:05.611628571Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 00:27:05.617649 containerd[1881]: time="2025-10-13T00:27:05.617601571Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943165"
Oct 13 00:27:05.621416 containerd[1881]: time="2025-10-13T00:27:05.621373467Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 00:27:05.625947 containerd[1881]: time="2025-10-13T00:27:05.625896027Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 00:27:05.626888 containerd[1881]: time="2025-10-13T00:27:05.626426427Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 3.834849584s"
Oct 13 00:27:05.626888 containerd[1881]: time="2025-10-13T00:27:05.626455931Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\""
Oct 13 00:27:07.021197 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Oct 13 00:27:07.024100 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 13 00:27:07.246089 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 13 00:27:07.253309 (kubelet)[2800]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 13 00:27:07.286238 kubelet[2800]: E1013 00:27:07.286106 2800 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 13 00:27:07.288495 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 00:27:07.288725 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 13 00:27:07.289287 systemd[1]: kubelet.service: Consumed 105ms CPU time, 107.2M memory peak.
Oct 13 00:27:07.928348 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 13 00:27:07.928761 systemd[1]: kubelet.service: Consumed 105ms CPU time, 107.2M memory peak.
Oct 13 00:27:07.931305 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 13 00:27:07.950449 systemd[1]: Reload requested from client PID 2814 ('systemctl') (unit session-9.scope)...
Oct 13 00:27:07.950564 systemd[1]: Reloading...
Oct 13 00:27:08.037101 zram_generator::config[2860]: No configuration found.
Oct 13 00:27:08.187694 systemd[1]: Reloading finished in 236 ms.
Oct 13 00:27:08.239284 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Oct 13 00:27:08.239492 systemd[1]: kubelet.service: Failed with result 'signal'.
Oct 13 00:27:08.239823 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 13 00:27:08.239861 systemd[1]: kubelet.service: Consumed 70ms CPU time, 95M memory peak.
Oct 13 00:27:08.241192 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 13 00:27:10.326919 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 13 00:27:10.335811 (kubelet)[2927]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Oct 13 00:27:10.362559 kubelet[2927]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 13 00:27:10.362559 kubelet[2927]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Oct 13 00:27:10.362559 kubelet[2927]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 13 00:27:10.363130 kubelet[2927]: I1013 00:27:10.362968 2927 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Oct 13 00:27:10.505419 kubelet[2927]: I1013 00:27:10.505382 2927 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Oct 13 00:27:10.505660 kubelet[2927]: I1013 00:27:10.505608 2927 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Oct 13 00:27:10.505927 kubelet[2927]: I1013 00:27:10.505914 2927 server.go:954] "Client rotation is on, will bootstrap in background"
Oct 13 00:27:10.524417 kubelet[2927]: E1013 00:27:10.524369 2927 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.34:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.34:6443: connect: connection refused" logger="UnhandledError"
Oct 13 00:27:10.524802 kubelet[2927]: I1013 00:27:10.524784 2927 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Oct 13 00:27:10.529312 kubelet[2927]: I1013 00:27:10.529299 2927 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Oct 13 00:27:10.531776 kubelet[2927]: I1013 00:27:10.531759 2927 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Oct 13 00:27:10.532056 kubelet[2927]: I1013 00:27:10.532035 2927 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 13 00:27:10.532238 kubelet[2927]: I1013 00:27:10.532114 2927 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.1.0-a-27183f81a1","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Oct 13 00:27:10.532358 kubelet[2927]: I1013 00:27:10.532347 2927 topology_manager.go:138] "Creating topology manager with none policy"
Oct 13 00:27:10.532403 kubelet[2927]: I1013 00:27:10.532396 2927 container_manager_linux.go:304] "Creating device plugin manager"
Oct 13 00:27:10.532569 kubelet[2927]: I1013 00:27:10.532558 2927 state_mem.go:36] "Initialized new in-memory state store"
Oct 13 00:27:10.535055 kubelet[2927]: I1013 00:27:10.535037 2927 kubelet.go:446] "Attempting to sync node with API server"
Oct 13 00:27:10.535139 kubelet[2927]: I1013 00:27:10.535130 2927 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Oct 13 00:27:10.535206 kubelet[2927]: I1013 00:27:10.535197 2927 kubelet.go:352] "Adding apiserver pod source"
Oct 13 00:27:10.535255 kubelet[2927]: I1013 00:27:10.535247 2927 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct 13 00:27:10.536706 kubelet[2927]: W1013 00:27:10.536658 2927 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.1.0-a-27183f81a1&limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused
Oct 13 00:27:10.536762 kubelet[2927]: E1013 00:27:10.536717 2927 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.1.0-a-27183f81a1&limit=500&resourceVersion=0\": dial tcp 10.200.20.34:6443: connect: connection refused" logger="UnhandledError"
Oct 13 00:27:10.537652 kubelet[2927]: W1013 00:27:10.537608 2927 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.34:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused
Oct 13 00:27:10.537652 kubelet[2927]: E1013 00:27:10.537646 2927 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.34:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.34:6443: connect: connection refused" logger="UnhandledError"
Oct 13 00:27:10.537762 kubelet[2927]: I1013 00:27:10.537733 2927 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Oct 13 00:27:10.538070 kubelet[2927]: I1013 00:27:10.538052 2927 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Oct 13 00:27:10.538108 kubelet[2927]: W1013 00:27:10.538100 2927 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Oct 13 00:27:10.538763 kubelet[2927]: I1013 00:27:10.538740 2927 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Oct 13 00:27:10.538802 kubelet[2927]: I1013 00:27:10.538770 2927 server.go:1287] "Started kubelet"
Oct 13 00:27:10.542321 kubelet[2927]: I1013 00:27:10.542300 2927 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Oct 13 00:27:10.542719 kubelet[2927]: E1013 00:27:10.542632 2927 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.34:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.34:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459.1.0-a-27183f81a1.186de56028f8f13c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.1.0-a-27183f81a1,UID:ci-4459.1.0-a-27183f81a1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459.1.0-a-27183f81a1,},FirstTimestamp:2025-10-13 00:27:10.538756412 +0000 UTC m=+0.200312787,LastTimestamp:2025-10-13 00:27:10.538756412 +0000 UTC m=+0.200312787,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.1.0-a-27183f81a1,}"
Oct 13 00:27:10.543041 kubelet[2927]: I1013 00:27:10.543025 2927 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Oct 13 00:27:10.543175 kubelet[2927]: I1013 00:27:10.543157 2927 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Oct 13 00:27:10.543771 kubelet[2927]: I1013 00:27:10.543756 2927 server.go:479] "Adding debug handlers to kubelet server"
Oct 13 00:27:10.545139 kubelet[2927]: I1013 00:27:10.545116 2927 volume_manager.go:297] "Starting Kubelet Volume Manager"
Oct 13 00:27:10.545503 kubelet[2927]: E1013 00:27:10.545474 2927 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-a-27183f81a1\" not found"
Oct 13 00:27:10.548131 kubelet[2927]: I1013 00:27:10.548081 2927 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Oct 13 00:27:10.548305 kubelet[2927]: I1013 00:27:10.548287 2927 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Oct 13 00:27:10.548869 kubelet[2927]: I1013 00:27:10.548847 2927 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Oct 13 00:27:10.548916 kubelet[2927]: I1013 00:27:10.548893 2927 reconciler.go:26] "Reconciler: start to sync state"
Oct 13 00:27:10.549203 kubelet[2927]: E1013 00:27:10.549176 2927 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.1.0-a-27183f81a1?timeout=10s\": dial tcp 10.200.20.34:6443: connect: connection refused" interval="200ms"
Oct 13 00:27:10.549257 kubelet[2927]: W1013 00:27:10.549235 2927 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused
Oct 13 00:27:10.549278 kubelet[2927]: E1013 00:27:10.549267 2927 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.34:6443: connect: connection refused" logger="UnhandledError"
Oct 13 00:27:10.551271 kubelet[2927]: I1013 00:27:10.551250 2927 factory.go:221] Registration of the systemd container factory successfully
Oct 13 00:27:10.551334 kubelet[2927]: I1013 00:27:10.551322 2927 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Oct 13 00:27:10.553340 kubelet[2927]: I1013 00:27:10.553317 2927 factory.go:221] Registration of the containerd container factory successfully
Oct 13 00:27:10.556585 kubelet[2927]: E1013 00:27:10.556563 2927 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Oct 13 00:27:10.571119 kubelet[2927]: I1013 00:27:10.571070 2927 cpu_manager.go:221] "Starting CPU manager" policy="none"
Oct 13 00:27:10.571119 kubelet[2927]: I1013 00:27:10.571084 2927 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Oct 13 00:27:10.571119 kubelet[2927]: I1013 00:27:10.571101 2927 state_mem.go:36] "Initialized new in-memory state store"
Oct 13 00:27:14.016211 kubelet[2927]: E1013 00:27:10.646415 2927 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-a-27183f81a1\" not found"
Oct 13 00:27:14.016211 kubelet[2927]: E1013 00:27:10.746786 2927 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-a-27183f81a1\" not found"
Oct 13 00:27:14.016211 kubelet[2927]: E1013 00:27:10.750289 2927 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.1.0-a-27183f81a1?timeout=10s\": dial tcp 10.200.20.34:6443: connect: connection refused" interval="400ms"
Oct 13 00:27:14.016211 kubelet[2927]: E1013 00:27:10.847334 2927 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-a-27183f81a1\" not found"
Oct 13 00:27:14.016211 kubelet[2927]: E1013 00:27:10.947743 2927 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-a-27183f81a1\" not found"
Oct 13 00:27:14.016211 kubelet[2927]: E1013 00:27:11.048524 2927 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-a-27183f81a1\" not found"
Oct 13 00:27:14.016211 kubelet[2927]: E1013 00:27:11.149069 2927 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-a-27183f81a1\" not found"
Oct 13 00:27:14.016211 kubelet[2927]: E1013 00:27:11.151565 2927 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.1.0-a-27183f81a1?timeout=10s\": dial tcp 10.200.20.34:6443: connect: connection refused" interval="800ms"
Oct 13 00:27:14.016211 kubelet[2927]: E1013 00:27:11.249127 2927 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-a-27183f81a1\" not found"
Oct 13 00:27:14.016211 kubelet[2927]: E1013 00:27:11.349678 2927 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-a-27183f81a1\" not found"
Oct 13 00:27:14.016677 kubelet[2927]: W1013 00:27:11.372253 2927 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.34:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused
Oct 13 00:27:14.016677 kubelet[2927]: E1013 00:27:11.372307 2927 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.34:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.34:6443: connect: connection refused" logger="UnhandledError"
Oct 13 00:27:14.016677 kubelet[2927]: W1013 00:27:11.378740 2927 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused
Oct 13 00:27:14.016677 kubelet[2927]: E1013 00:27:11.378763 2927 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\":
dial tcp 10.200.20.34:6443: connect: connection refused" logger="UnhandledError" Oct 13 00:27:14.016677 kubelet[2927]: E1013 00:27:11.450275 2927 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-a-27183f81a1\" not found" Oct 13 00:27:14.016677 kubelet[2927]: E1013 00:27:11.550682 2927 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-a-27183f81a1\" not found" Oct 13 00:27:14.016677 kubelet[2927]: E1013 00:27:11.651306 2927 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-a-27183f81a1\" not found" Oct 13 00:27:14.016677 kubelet[2927]: E1013 00:27:11.751895 2927 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-a-27183f81a1\" not found" Oct 13 00:27:14.016677 kubelet[2927]: E1013 00:27:11.852429 2927 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-a-27183f81a1\" not found" Oct 13 00:27:14.016830 kubelet[2927]: W1013 00:27:11.915108 2927 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.1.0-a-27183f81a1&limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused Oct 13 00:27:14.016830 kubelet[2927]: E1013 00:27:11.915158 2927 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.1.0-a-27183f81a1&limit=500&resourceVersion=0\": dial tcp 10.200.20.34:6443: connect: connection refused" logger="UnhandledError" Oct 13 00:27:14.016830 kubelet[2927]: E1013 00:27:11.952915 2927 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.200.20.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.1.0-a-27183f81a1?timeout=10s\": dial tcp 10.200.20.34:6443: connect: connection refused" interval="1.6s" Oct 13 00:27:14.016830 kubelet[2927]: E1013 00:27:11.952987 2927 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-a-27183f81a1\" not found" Oct 13 00:27:14.016830 kubelet[2927]: E1013 00:27:12.053869 2927 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-a-27183f81a1\" not found" Oct 13 00:27:14.016830 kubelet[2927]: E1013 00:27:12.154443 2927 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-a-27183f81a1\" not found" Oct 13 00:27:14.016830 kubelet[2927]: E1013 00:27:12.254956 2927 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-a-27183f81a1\" not found" Oct 13 00:27:14.016830 kubelet[2927]: E1013 00:27:12.355913 2927 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-a-27183f81a1\" not found" Oct 13 00:27:14.016830 kubelet[2927]: E1013 00:27:12.456647 2927 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-a-27183f81a1\" not found" Oct 13 00:27:14.016830 kubelet[2927]: E1013 00:27:12.557282 2927 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-a-27183f81a1\" not found" Oct 13 00:27:14.017826 kubelet[2927]: E1013 00:27:12.584373 2927 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.34:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.34:6443: connect: connection refused" logger="UnhandledError" Oct 13 00:27:14.017826 
kubelet[2927]: E1013 00:27:12.657913 2927 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-a-27183f81a1\" not found" Oct 13 00:27:14.017826 kubelet[2927]: E1013 00:27:12.758605 2927 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-a-27183f81a1\" not found" Oct 13 00:27:14.017826 kubelet[2927]: E1013 00:27:12.859176 2927 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-a-27183f81a1\" not found" Oct 13 00:27:14.017826 kubelet[2927]: E1013 00:27:12.959983 2927 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-a-27183f81a1\" not found" Oct 13 00:27:14.017826 kubelet[2927]: E1013 00:27:13.061040 2927 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-a-27183f81a1\" not found" Oct 13 00:27:14.017826 kubelet[2927]: E1013 00:27:13.161751 2927 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-a-27183f81a1\" not found" Oct 13 00:27:14.017826 kubelet[2927]: E1013 00:27:13.262530 2927 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-a-27183f81a1\" not found" Oct 13 00:27:14.017826 kubelet[2927]: E1013 00:27:13.363143 2927 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-a-27183f81a1\" not found" Oct 13 00:27:14.017826 kubelet[2927]: E1013 00:27:13.463933 2927 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-a-27183f81a1\" not found" Oct 13 00:27:14.017984 kubelet[2927]: E1013 00:27:13.553499 2927 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.1.0-a-27183f81a1?timeout=10s\": dial tcp 10.200.20.34:6443: connect: connection refused" 
interval="3.2s" Oct 13 00:27:14.017984 kubelet[2927]: E1013 00:27:13.564679 2927 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-a-27183f81a1\" not found" Oct 13 00:27:14.017984 kubelet[2927]: E1013 00:27:13.665254 2927 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-a-27183f81a1\" not found" Oct 13 00:27:14.017984 kubelet[2927]: E1013 00:27:13.765896 2927 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-a-27183f81a1\" not found" Oct 13 00:27:14.017984 kubelet[2927]: W1013 00:27:13.830517 2927 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.34:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused Oct 13 00:27:14.017984 kubelet[2927]: E1013 00:27:13.830551 2927 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.34:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.34:6443: connect: connection refused" logger="UnhandledError" Oct 13 00:27:14.017984 kubelet[2927]: E1013 00:27:13.865999 2927 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-a-27183f81a1\" not found" Oct 13 00:27:14.017984 kubelet[2927]: E1013 00:27:13.966825 2927 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-a-27183f81a1\" not found" Oct 13 00:27:14.017984 kubelet[2927]: W1013 00:27:13.992438 2927 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused Oct 
13 00:27:14.018103 kubelet[2927]: E1013 00:27:13.992467 2927 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.34:6443: connect: connection refused" logger="UnhandledError" Oct 13 00:27:14.067321 kubelet[2927]: E1013 00:27:14.067271 2927 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-a-27183f81a1\" not found" Oct 13 00:27:14.111936 kubelet[2927]: W1013 00:27:14.111899 2927 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.1.0-a-27183f81a1&limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused Oct 13 00:27:14.112097 kubelet[2927]: E1013 00:27:14.111964 2927 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.1.0-a-27183f81a1&limit=500&resourceVersion=0\": dial tcp 10.200.20.34:6443: connect: connection refused" logger="UnhandledError" Oct 13 00:27:14.112852 kubelet[2927]: I1013 00:27:14.112761 2927 policy_none.go:49] "None policy: Start" Oct 13 00:27:14.112852 kubelet[2927]: I1013 00:27:14.112781 2927 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 13 00:27:14.112852 kubelet[2927]: I1013 00:27:14.112792 2927 state_mem.go:35] "Initializing new in-memory state store" Oct 13 00:27:14.125802 kubelet[2927]: I1013 00:27:14.125770 2927 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 13 00:27:14.126659 kubelet[2927]: I1013 00:27:14.126624 2927 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Oct 13 00:27:14.126659 kubelet[2927]: I1013 00:27:14.126652 2927 status_manager.go:227] "Starting to sync pod status with apiserver" Oct 13 00:27:14.126733 kubelet[2927]: I1013 00:27:14.126668 2927 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 13 00:27:14.126733 kubelet[2927]: I1013 00:27:14.126673 2927 kubelet.go:2382] "Starting kubelet main sync loop" Oct 13 00:27:14.126733 kubelet[2927]: E1013 00:27:14.126708 2927 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 13 00:27:14.129080 kubelet[2927]: W1013 00:27:14.129023 2927 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused Oct 13 00:27:14.129177 kubelet[2927]: E1013 00:27:14.129151 2927 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.34:6443: connect: connection refused" logger="UnhandledError" Oct 13 00:27:14.167867 kubelet[2927]: E1013 00:27:14.167825 2927 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-a-27183f81a1\" not found" Oct 13 00:27:14.210082 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 13 00:27:14.223596 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Oct 13 00:27:14.227140 kubelet[2927]: E1013 00:27:14.226819 2927 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 13 00:27:14.227057 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Oct 13 00:27:14.234586 kubelet[2927]: I1013 00:27:14.234556 2927 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 13 00:27:14.234748 kubelet[2927]: I1013 00:27:14.234732 2927 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 13 00:27:14.234770 kubelet[2927]: I1013 00:27:14.234746 2927 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 13 00:27:14.235325 kubelet[2927]: I1013 00:27:14.235236 2927 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 13 00:27:14.236524 kubelet[2927]: E1013 00:27:14.236143 2927 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 13 00:27:14.236524 kubelet[2927]: E1013 00:27:14.236190 2927 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459.1.0-a-27183f81a1\" not found" Oct 13 00:27:14.336767 kubelet[2927]: I1013 00:27:14.336652 2927 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.1.0-a-27183f81a1" Oct 13 00:27:14.337194 kubelet[2927]: E1013 00:27:14.337173 2927 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.34:6443/api/v1/nodes\": dial tcp 10.200.20.34:6443: connect: connection refused" node="ci-4459.1.0-a-27183f81a1" Oct 13 00:27:14.436444 systemd[1]: Created slice kubepods-burstable-podb67c4a4bd57bf558a522d05a734ae30e.slice - libcontainer container kubepods-burstable-podb67c4a4bd57bf558a522d05a734ae30e.slice. 
Oct 13 00:27:14.455211 kubelet[2927]: E1013 00:27:14.455016 2927 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-a-27183f81a1\" not found" node="ci-4459.1.0-a-27183f81a1" Oct 13 00:27:14.457841 systemd[1]: Created slice kubepods-burstable-pod452b3a280f09ab84284e9004345f9de1.slice - libcontainer container kubepods-burstable-pod452b3a280f09ab84284e9004345f9de1.slice. Oct 13 00:27:14.465595 kubelet[2927]: E1013 00:27:14.465425 2927 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-a-27183f81a1\" not found" node="ci-4459.1.0-a-27183f81a1" Oct 13 00:27:14.467633 systemd[1]: Created slice kubepods-burstable-pod1bb9708877731109ab0ca44713ada2a4.slice - libcontainer container kubepods-burstable-pod1bb9708877731109ab0ca44713ada2a4.slice. Oct 13 00:27:14.468952 kubelet[2927]: E1013 00:27:14.468889 2927 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-a-27183f81a1\" not found" node="ci-4459.1.0-a-27183f81a1" Oct 13 00:27:14.473063 kubelet[2927]: I1013 00:27:14.473043 2927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/452b3a280f09ab84284e9004345f9de1-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.1.0-a-27183f81a1\" (UID: \"452b3a280f09ab84284e9004345f9de1\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-a-27183f81a1" Oct 13 00:27:14.473312 kubelet[2927]: I1013 00:27:14.473285 2927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/452b3a280f09ab84284e9004345f9de1-kubeconfig\") pod \"kube-controller-manager-ci-4459.1.0-a-27183f81a1\" (UID: \"452b3a280f09ab84284e9004345f9de1\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-a-27183f81a1" 
Oct 13 00:27:14.473421 kubelet[2927]: I1013 00:27:14.473408 2927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1bb9708877731109ab0ca44713ada2a4-kubeconfig\") pod \"kube-scheduler-ci-4459.1.0-a-27183f81a1\" (UID: \"1bb9708877731109ab0ca44713ada2a4\") " pod="kube-system/kube-scheduler-ci-4459.1.0-a-27183f81a1" Oct 13 00:27:14.473490 kubelet[2927]: I1013 00:27:14.473481 2927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b67c4a4bd57bf558a522d05a734ae30e-ca-certs\") pod \"kube-apiserver-ci-4459.1.0-a-27183f81a1\" (UID: \"b67c4a4bd57bf558a522d05a734ae30e\") " pod="kube-system/kube-apiserver-ci-4459.1.0-a-27183f81a1" Oct 13 00:27:14.473547 kubelet[2927]: I1013 00:27:14.473537 2927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b67c4a4bd57bf558a522d05a734ae30e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.1.0-a-27183f81a1\" (UID: \"b67c4a4bd57bf558a522d05a734ae30e\") " pod="kube-system/kube-apiserver-ci-4459.1.0-a-27183f81a1" Oct 13 00:27:14.473672 kubelet[2927]: I1013 00:27:14.473597 2927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/452b3a280f09ab84284e9004345f9de1-k8s-certs\") pod \"kube-controller-manager-ci-4459.1.0-a-27183f81a1\" (UID: \"452b3a280f09ab84284e9004345f9de1\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-a-27183f81a1" Oct 13 00:27:14.473672 kubelet[2927]: I1013 00:27:14.473612 2927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/452b3a280f09ab84284e9004345f9de1-usr-share-ca-certificates\") pod 
\"kube-controller-manager-ci-4459.1.0-a-27183f81a1\" (UID: \"452b3a280f09ab84284e9004345f9de1\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-a-27183f81a1" Oct 13 00:27:14.473672 kubelet[2927]: I1013 00:27:14.473625 2927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b67c4a4bd57bf558a522d05a734ae30e-k8s-certs\") pod \"kube-apiserver-ci-4459.1.0-a-27183f81a1\" (UID: \"b67c4a4bd57bf558a522d05a734ae30e\") " pod="kube-system/kube-apiserver-ci-4459.1.0-a-27183f81a1" Oct 13 00:27:14.473672 kubelet[2927]: I1013 00:27:14.473635 2927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/452b3a280f09ab84284e9004345f9de1-ca-certs\") pod \"kube-controller-manager-ci-4459.1.0-a-27183f81a1\" (UID: \"452b3a280f09ab84284e9004345f9de1\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-a-27183f81a1" Oct 13 00:27:14.539000 kubelet[2927]: I1013 00:27:14.538893 2927 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.1.0-a-27183f81a1" Oct 13 00:27:14.539250 kubelet[2927]: E1013 00:27:14.539226 2927 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.34:6443/api/v1/nodes\": dial tcp 10.200.20.34:6443: connect: connection refused" node="ci-4459.1.0-a-27183f81a1" Oct 13 00:27:14.760698 containerd[1881]: time="2025-10-13T00:27:14.760417472Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.1.0-a-27183f81a1,Uid:b67c4a4bd57bf558a522d05a734ae30e,Namespace:kube-system,Attempt:0,}" Oct 13 00:27:14.767017 containerd[1881]: time="2025-10-13T00:27:14.766977364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.1.0-a-27183f81a1,Uid:452b3a280f09ab84284e9004345f9de1,Namespace:kube-system,Attempt:0,}" Oct 13 00:27:14.769674 containerd[1881]: 
time="2025-10-13T00:27:14.769647223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.1.0-a-27183f81a1,Uid:1bb9708877731109ab0ca44713ada2a4,Namespace:kube-system,Attempt:0,}" Oct 13 00:27:14.941259 kubelet[2927]: I1013 00:27:14.941227 2927 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.1.0-a-27183f81a1" Oct 13 00:27:14.941566 kubelet[2927]: E1013 00:27:14.941545 2927 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.34:6443/api/v1/nodes\": dial tcp 10.200.20.34:6443: connect: connection refused" node="ci-4459.1.0-a-27183f81a1" Oct 13 00:27:15.103272 kubelet[2927]: W1013 00:27:15.103238 2927 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused Oct 13 00:27:15.103606 kubelet[2927]: E1013 00:27:15.103280 2927 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.34:6443: connect: connection refused" logger="UnhandledError" Oct 13 00:27:15.743198 kubelet[2927]: I1013 00:27:15.743153 2927 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.1.0-a-27183f81a1" Oct 13 00:27:15.743767 kubelet[2927]: E1013 00:27:15.743744 2927 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.34:6443/api/v1/nodes\": dial tcp 10.200.20.34:6443: connect: connection refused" node="ci-4459.1.0-a-27183f81a1" Oct 13 00:27:16.678803 kernel: hv_balloon: Max. 
dynamic memory size: 4096 MB Oct 13 00:27:16.728902 kubelet[2927]: E1013 00:27:16.728860 2927 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.34:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.34:6443: connect: connection refused" logger="UnhandledError" Oct 13 00:27:16.754024 kubelet[2927]: E1013 00:27:16.753995 2927 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.1.0-a-27183f81a1?timeout=10s\": dial tcp 10.200.20.34:6443: connect: connection refused" interval="6.4s" Oct 13 00:27:17.347739 kubelet[2927]: I1013 00:27:17.347708 2927 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.1.0-a-27183f81a1" Oct 13 00:27:17.349026 kubelet[2927]: E1013 00:27:17.348224 2927 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.34:6443/api/v1/nodes\": dial tcp 10.200.20.34:6443: connect: connection refused" node="ci-4459.1.0-a-27183f81a1" Oct 13 00:27:17.394112 kubelet[2927]: W1013 00:27:17.394050 2927 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused Oct 13 00:27:17.394285 kubelet[2927]: E1013 00:27:17.394263 2927 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.34:6443: connect: connection refused" logger="UnhandledError" Oct 13 00:27:17.396880 kubelet[2927]: W1013 
00:27:17.396834 2927 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused Oct 13 00:27:17.397043 kubelet[2927]: E1013 00:27:17.397021 2927 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.34:6443: connect: connection refused" logger="UnhandledError" Oct 13 00:27:19.718684 kubelet[2927]: E1013 00:27:19.717858 2927 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.34:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.34:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459.1.0-a-27183f81a1.186de56028f8f13c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.1.0-a-27183f81a1,UID:ci-4459.1.0-a-27183f81a1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459.1.0-a-27183f81a1,},FirstTimestamp:2025-10-13 00:27:10.538756412 +0000 UTC m=+0.200312787,LastTimestamp:2025-10-13 00:27:10.538756412 +0000 UTC m=+0.200312787,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.1.0-a-27183f81a1,}" Oct 13 00:27:20.956538 kubelet[2927]: W1013 00:27:19.761899 2927 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.1.0-a-27183f81a1&limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused Oct 13 00:27:20.956538 kubelet[2927]: 
E1013 00:27:19.761976 2927 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.1.0-a-27183f81a1&limit=500&resourceVersion=0\": dial tcp 10.200.20.34:6443: connect: connection refused" logger="UnhandledError" Oct 13 00:27:20.956538 kubelet[2927]: W1013 00:27:19.789626 2927 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.34:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused Oct 13 00:27:20.956538 kubelet[2927]: E1013 00:27:19.789678 2927 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.34:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.34:6443: connect: connection refused" logger="UnhandledError" Oct 13 00:27:20.956538 kubelet[2927]: I1013 00:27:20.550079 2927 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.1.0-a-27183f81a1" Oct 13 00:27:20.956538 kubelet[2927]: E1013 00:27:20.550355 2927 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.34:6443/api/v1/nodes\": dial tcp 10.200.20.34:6443: connect: connection refused" node="ci-4459.1.0-a-27183f81a1" Oct 13 00:27:20.956906 update_engine[1863]: I20251013 00:27:20.505878 1863 update_attempter.cc:509] Updating boot flags... 
Oct 13 00:27:21.820042 kubelet[2927]: W1013 00:27:21.819999 2927 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused Oct 13 00:27:21.820042 kubelet[2927]: E1013 00:27:21.820044 2927 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.34:6443: connect: connection refused" logger="UnhandledError" Oct 13 00:27:21.873639 containerd[1881]: time="2025-10-13T00:27:21.873598443Z" level=info msg="connecting to shim 7c3227a166e56e6f022dd802ddaeb0a406508786a1db98856a8ebae7acc02799" address="unix:///run/containerd/s/4667f9ead78429bfab53bd26c0a5bfca78fdba3e7cef30327d038f551cb75993" namespace=k8s.io protocol=ttrpc version=3 Oct 13 00:27:21.895110 systemd[1]: Started cri-containerd-7c3227a166e56e6f022dd802ddaeb0a406508786a1db98856a8ebae7acc02799.scope - libcontainer container 7c3227a166e56e6f022dd802ddaeb0a406508786a1db98856a8ebae7acc02799. 
Oct 13 00:27:21.927650 containerd[1881]: time="2025-10-13T00:27:21.927575762Z" level=info msg="connecting to shim 6f7dda8ae2941eb1199aaf6f30dff4d241e063fc0b6c1468f2704686fa983ef7" address="unix:///run/containerd/s/ecd8d82226c101dca544455b7f7eaf99f8409babdc071497b4f6155c23db2b0a" namespace=k8s.io protocol=ttrpc version=3
Oct 13 00:27:21.930539 containerd[1881]: time="2025-10-13T00:27:21.930340169Z" level=info msg="connecting to shim e3a7be091c8092b3c6b2a74d1dd30d73cc11e33246b7d145141723f8a4e2d1eb" address="unix:///run/containerd/s/7b723eec3cc91e1b4ad83808f5dacf45aa89754f3f9ebc83fdede276c452698b" namespace=k8s.io protocol=ttrpc version=3
Oct 13 00:27:21.951098 systemd[1]: Started cri-containerd-6f7dda8ae2941eb1199aaf6f30dff4d241e063fc0b6c1468f2704686fa983ef7.scope - libcontainer container 6f7dda8ae2941eb1199aaf6f30dff4d241e063fc0b6c1468f2704686fa983ef7.
Oct 13 00:27:21.954272 systemd[1]: Started cri-containerd-e3a7be091c8092b3c6b2a74d1dd30d73cc11e33246b7d145141723f8a4e2d1eb.scope - libcontainer container e3a7be091c8092b3c6b2a74d1dd30d73cc11e33246b7d145141723f8a4e2d1eb.
Oct 13 00:27:21.957227 containerd[1881]: time="2025-10-13T00:27:21.957081667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.1.0-a-27183f81a1,Uid:452b3a280f09ab84284e9004345f9de1,Namespace:kube-system,Attempt:0,} returns sandbox id \"7c3227a166e56e6f022dd802ddaeb0a406508786a1db98856a8ebae7acc02799\""
Oct 13 00:27:21.960202 containerd[1881]: time="2025-10-13T00:27:21.960172101Z" level=info msg="CreateContainer within sandbox \"7c3227a166e56e6f022dd802ddaeb0a406508786a1db98856a8ebae7acc02799\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Oct 13 00:27:22.114511 containerd[1881]: time="2025-10-13T00:27:22.114353256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.1.0-a-27183f81a1,Uid:b67c4a4bd57bf558a522d05a734ae30e,Namespace:kube-system,Attempt:0,} returns sandbox id \"6f7dda8ae2941eb1199aaf6f30dff4d241e063fc0b6c1468f2704686fa983ef7\""
Oct 13 00:27:22.116611 containerd[1881]: time="2025-10-13T00:27:22.116582243Z" level=info msg="CreateContainer within sandbox \"6f7dda8ae2941eb1199aaf6f30dff4d241e063fc0b6c1468f2704686fa983ef7\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Oct 13 00:27:22.163809 containerd[1881]: time="2025-10-13T00:27:22.163697715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.1.0-a-27183f81a1,Uid:1bb9708877731109ab0ca44713ada2a4,Namespace:kube-system,Attempt:0,} returns sandbox id \"e3a7be091c8092b3c6b2a74d1dd30d73cc11e33246b7d145141723f8a4e2d1eb\""
Oct 13 00:27:22.166227 containerd[1881]: time="2025-10-13T00:27:22.166192231Z" level=info msg="CreateContainer within sandbox \"e3a7be091c8092b3c6b2a74d1dd30d73cc11e33246b7d145141723f8a4e2d1eb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Oct 13 00:27:22.259540 containerd[1881]: time="2025-10-13T00:27:22.258805370Z" level=info msg="Container 8ddf4ff912c1e7bc95f276efd25081518941e7c5f8bc51cb4f71e83ddd1487a8: CDI devices from CRI Config.CDIDevices: []"
Oct 13 00:27:22.608438 containerd[1881]: time="2025-10-13T00:27:22.607906927Z" level=info msg="Container ccd547a12ef77900dc69f28ef90d0f23ae23ef89a8ec4ecf394980e7278a6ff4: CDI devices from CRI Config.CDIDevices: []"
Oct 13 00:27:22.720154 containerd[1881]: time="2025-10-13T00:27:22.720105816Z" level=info msg="Container b1e7c87bc59a69824f060bb962f862a34f16e0424ccd09bfea15832dfd7dbfb8: CDI devices from CRI Config.CDIDevices: []"
Oct 13 00:27:22.722599 containerd[1881]: time="2025-10-13T00:27:22.722453781Z" level=info msg="CreateContainer within sandbox \"7c3227a166e56e6f022dd802ddaeb0a406508786a1db98856a8ebae7acc02799\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8ddf4ff912c1e7bc95f276efd25081518941e7c5f8bc51cb4f71e83ddd1487a8\""
Oct 13 00:27:22.723329 containerd[1881]: time="2025-10-13T00:27:22.723298976Z" level=info msg="StartContainer for \"8ddf4ff912c1e7bc95f276efd25081518941e7c5f8bc51cb4f71e83ddd1487a8\""
Oct 13 00:27:22.724392 containerd[1881]: time="2025-10-13T00:27:22.724355059Z" level=info msg="connecting to shim 8ddf4ff912c1e7bc95f276efd25081518941e7c5f8bc51cb4f71e83ddd1487a8" address="unix:///run/containerd/s/4667f9ead78429bfab53bd26c0a5bfca78fdba3e7cef30327d038f551cb75993" protocol=ttrpc version=3
Oct 13 00:27:22.742116 systemd[1]: Started cri-containerd-8ddf4ff912c1e7bc95f276efd25081518941e7c5f8bc51cb4f71e83ddd1487a8.scope - libcontainer container 8ddf4ff912c1e7bc95f276efd25081518941e7c5f8bc51cb4f71e83ddd1487a8.
Oct 13 00:27:22.817644 containerd[1881]: time="2025-10-13T00:27:22.817595145Z" level=info msg="StartContainer for \"8ddf4ff912c1e7bc95f276efd25081518941e7c5f8bc51cb4f71e83ddd1487a8\" returns successfully"
Oct 13 00:27:23.020251 containerd[1881]: time="2025-10-13T00:27:23.020119810Z" level=info msg="CreateContainer within sandbox \"e3a7be091c8092b3c6b2a74d1dd30d73cc11e33246b7d145141723f8a4e2d1eb\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b1e7c87bc59a69824f060bb962f862a34f16e0424ccd09bfea15832dfd7dbfb8\""
Oct 13 00:27:23.021991 containerd[1881]: time="2025-10-13T00:27:23.021511512Z" level=info msg="StartContainer for \"b1e7c87bc59a69824f060bb962f862a34f16e0424ccd09bfea15832dfd7dbfb8\""
Oct 13 00:27:23.022568 containerd[1881]: time="2025-10-13T00:27:23.022548777Z" level=info msg="connecting to shim b1e7c87bc59a69824f060bb962f862a34f16e0424ccd09bfea15832dfd7dbfb8" address="unix:///run/containerd/s/7b723eec3cc91e1b4ad83808f5dacf45aa89754f3f9ebc83fdede276c452698b" protocol=ttrpc version=3
Oct 13 00:27:23.023915 containerd[1881]: time="2025-10-13T00:27:23.023894893Z" level=info msg="CreateContainer within sandbox \"6f7dda8ae2941eb1199aaf6f30dff4d241e063fc0b6c1468f2704686fa983ef7\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ccd547a12ef77900dc69f28ef90d0f23ae23ef89a8ec4ecf394980e7278a6ff4\""
Oct 13 00:27:23.026110 containerd[1881]: time="2025-10-13T00:27:23.024738729Z" level=info msg="StartContainer for \"ccd547a12ef77900dc69f28ef90d0f23ae23ef89a8ec4ecf394980e7278a6ff4\""
Oct 13 00:27:23.030422 containerd[1881]: time="2025-10-13T00:27:23.030402921Z" level=info msg="connecting to shim ccd547a12ef77900dc69f28ef90d0f23ae23ef89a8ec4ecf394980e7278a6ff4" address="unix:///run/containerd/s/ecd8d82226c101dca544455b7f7eaf99f8409babdc071497b4f6155c23db2b0a" protocol=ttrpc version=3
Oct 13 00:27:23.052216 systemd[1]: Started cri-containerd-b1e7c87bc59a69824f060bb962f862a34f16e0424ccd09bfea15832dfd7dbfb8.scope - libcontainer container b1e7c87bc59a69824f060bb962f862a34f16e0424ccd09bfea15832dfd7dbfb8.
Oct 13 00:27:23.068212 systemd[1]: Started cri-containerd-ccd547a12ef77900dc69f28ef90d0f23ae23ef89a8ec4ecf394980e7278a6ff4.scope - libcontainer container ccd547a12ef77900dc69f28ef90d0f23ae23ef89a8ec4ecf394980e7278a6ff4.
Oct 13 00:27:23.120892 containerd[1881]: time="2025-10-13T00:27:23.120830997Z" level=info msg="StartContainer for \"ccd547a12ef77900dc69f28ef90d0f23ae23ef89a8ec4ecf394980e7278a6ff4\" returns successfully"
Oct 13 00:27:23.147372 containerd[1881]: time="2025-10-13T00:27:23.147281779Z" level=info msg="StartContainer for \"b1e7c87bc59a69824f060bb962f862a34f16e0424ccd09bfea15832dfd7dbfb8\" returns successfully"
Oct 13 00:27:23.151439 kubelet[2927]: E1013 00:27:23.151374 2927 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-a-27183f81a1\" not found" node="ci-4459.1.0-a-27183f81a1"
Oct 13 00:27:23.154748 kubelet[2927]: E1013 00:27:23.154584 2927 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.1.0-a-27183f81a1?timeout=10s\": dial tcp 10.200.20.34:6443: connect: connection refused" interval="7s"
Oct 13 00:27:23.155148 kubelet[2927]: E1013 00:27:23.155065 2927 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-a-27183f81a1\" not found" node="ci-4459.1.0-a-27183f81a1"
Oct 13 00:27:23.157544 kubelet[2927]: E1013 00:27:23.157509 2927 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-a-27183f81a1\" not found" node="ci-4459.1.0-a-27183f81a1"
Oct 13 00:27:24.160004 kubelet[2927]: E1013 00:27:24.159197 2927 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-a-27183f81a1\" not found" node="ci-4459.1.0-a-27183f81a1"
Oct 13 00:27:24.160567 kubelet[2927]: E1013 00:27:24.159257 2927 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-a-27183f81a1\" not found" node="ci-4459.1.0-a-27183f81a1"
Oct 13 00:27:24.160700 kubelet[2927]: E1013 00:27:24.159328 2927 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-a-27183f81a1\" not found" node="ci-4459.1.0-a-27183f81a1"
Oct 13 00:27:24.236483 kubelet[2927]: E1013 00:27:24.236412 2927 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459.1.0-a-27183f81a1\" not found"
Oct 13 00:27:24.382185 kubelet[2927]: E1013 00:27:24.382149 2927 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4459.1.0-a-27183f81a1" not found
Oct 13 00:27:24.737027 kubelet[2927]: E1013 00:27:24.736986 2927 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4459.1.0-a-27183f81a1" not found
Oct 13 00:27:25.161138 kubelet[2927]: E1013 00:27:25.161106 2927 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-a-27183f81a1\" not found" node="ci-4459.1.0-a-27183f81a1"
Oct 13 00:27:25.176319 kubelet[2927]: E1013 00:27:25.176285 2927 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4459.1.0-a-27183f81a1" not found
Oct 13 00:27:26.055927 kubelet[2927]: E1013 00:27:26.055874 2927 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4459.1.0-a-27183f81a1" not found
Oct 13 00:27:26.952933 kubelet[2927]: I1013 00:27:26.952701 2927 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.1.0-a-27183f81a1"
Oct 13 00:27:26.962225 kubelet[2927]: I1013 00:27:26.962195 2927 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.1.0-a-27183f81a1"
Oct 13 00:27:26.962225 kubelet[2927]: E1013 00:27:26.962225 2927 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4459.1.0-a-27183f81a1\": node \"ci-4459.1.0-a-27183f81a1\" not found"
Oct 13 00:27:26.969069 kubelet[2927]: E1013 00:27:26.969044 2927 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-a-27183f81a1\" not found"
Oct 13 00:27:27.070081 kubelet[2927]: E1013 00:27:27.070044 2927 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-a-27183f81a1\" not found"
Oct 13 00:27:27.170893 kubelet[2927]: E1013 00:27:27.170865 2927 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-a-27183f81a1\" not found"
Oct 13 00:27:27.271752 kubelet[2927]: E1013 00:27:27.271716 2927 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-a-27183f81a1\" not found"
Oct 13 00:27:27.372263 kubelet[2927]: E1013 00:27:27.372215 2927 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-a-27183f81a1\" not found"
Oct 13 00:27:27.473043 kubelet[2927]: E1013 00:27:27.473000 2927 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-a-27183f81a1\" not found"
Oct 13 00:27:27.481830 systemd[1]: Reload requested from client PID 3263 ('systemctl') (unit session-9.scope)...
Oct 13 00:27:27.482130 systemd[1]: Reloading...
Oct 13 00:27:27.486595 kubelet[2927]: E1013 00:27:27.486462 2927 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-a-27183f81a1\" not found" node="ci-4459.1.0-a-27183f81a1"
Oct 13 00:27:27.561038 zram_generator::config[3310]: No configuration found.
Oct 13 00:27:27.573730 kubelet[2927]: E1013 00:27:27.573687 2927 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-a-27183f81a1\" not found"
Oct 13 00:27:27.674482 kubelet[2927]: E1013 00:27:27.674433 2927 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-a-27183f81a1\" not found"
Oct 13 00:27:27.732274 systemd[1]: Reloading finished in 249 ms.
Oct 13 00:27:27.760177 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 13 00:27:27.775850 systemd[1]: kubelet.service: Deactivated successfully.
Oct 13 00:27:27.776083 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 13 00:27:27.776145 systemd[1]: kubelet.service: Consumed 474ms CPU time, 126.8M memory peak.
Oct 13 00:27:27.777831 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 13 00:27:27.930644 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 13 00:27:27.934646 (kubelet)[3373]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Oct 13 00:27:28.001696 kubelet[3373]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 13 00:27:28.001696 kubelet[3373]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Oct 13 00:27:28.001696 kubelet[3373]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 13 00:27:28.001696 kubelet[3373]: I1013 00:27:28.001359 3373 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Oct 13 00:27:28.007715 kubelet[3373]: I1013 00:27:28.007683 3373 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Oct 13 00:27:28.007715 kubelet[3373]: I1013 00:27:28.007709 3373 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Oct 13 00:27:28.007902 kubelet[3373]: I1013 00:27:28.007886 3373 server.go:954] "Client rotation is on, will bootstrap in background"
Oct 13 00:27:28.011085 kubelet[3373]: I1013 00:27:28.011034 3373 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Oct 13 00:27:28.012879 kubelet[3373]: I1013 00:27:28.012808 3373 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Oct 13 00:27:28.017542 kubelet[3373]: I1013 00:27:28.017519 3373 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Oct 13 00:27:28.019911 kubelet[3373]: I1013 00:27:28.019889 3373 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Oct 13 00:27:28.020111 kubelet[3373]: I1013 00:27:28.020086 3373 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 13 00:27:28.020237 kubelet[3373]: I1013 00:27:28.020110 3373 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.1.0-a-27183f81a1","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Oct 13 00:27:28.020309 kubelet[3373]: I1013 00:27:28.020243 3373 topology_manager.go:138] "Creating topology manager with none policy"
Oct 13 00:27:28.020309 kubelet[3373]: I1013 00:27:28.020250 3373 container_manager_linux.go:304] "Creating device plugin manager"
Oct 13 00:27:28.020309 kubelet[3373]: I1013 00:27:28.020286 3373 state_mem.go:36] "Initialized new in-memory state store"
Oct 13 00:27:28.020474 kubelet[3373]: I1013 00:27:28.020460 3373 kubelet.go:446] "Attempting to sync node with API server"
Oct 13 00:27:28.020503 kubelet[3373]: I1013 00:27:28.020476 3373 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Oct 13 00:27:28.020503 kubelet[3373]: I1013 00:27:28.020494 3373 kubelet.go:352] "Adding apiserver pod source"
Oct 13 00:27:28.021241 kubelet[3373]: I1013 00:27:28.020505 3373 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct 13 00:27:28.021925 kubelet[3373]: I1013 00:27:28.021907 3373 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Oct 13 00:27:28.022658 kubelet[3373]: I1013 00:27:28.022639 3373 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Oct 13 00:27:28.023534 kubelet[3373]: I1013 00:27:28.023519 3373 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Oct 13 00:27:28.025018 kubelet[3373]: I1013 00:27:28.024999 3373 server.go:1287] "Started kubelet"
Oct 13 00:27:28.027000 kubelet[3373]: I1013 00:27:28.026964 3373 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Oct 13 00:27:28.028636 kubelet[3373]: I1013 00:27:28.028477 3373 server.go:479] "Adding debug handlers to kubelet server"
Oct 13 00:27:28.030992 kubelet[3373]: I1013 00:27:28.028531 3373 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Oct 13 00:27:28.037172 kubelet[3373]: I1013 00:27:28.037122 3373 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Oct 13 00:27:28.037431 kubelet[3373]: I1013 00:27:28.037416 3373 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Oct 13 00:27:28.039529 kubelet[3373]: I1013 00:27:28.028655 3373 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Oct 13 00:27:28.039686 kubelet[3373]: I1013 00:27:28.039558 3373 volume_manager.go:297] "Starting Kubelet Volume Manager"
Oct 13 00:27:28.039874 kubelet[3373]: I1013 00:27:28.039860 3373 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Oct 13 00:27:28.043142 kubelet[3373]: I1013 00:27:28.043116 3373 reconciler.go:26] "Reconciler: start to sync state"
Oct 13 00:27:28.052991 kubelet[3373]: I1013 00:27:28.052503 3373 factory.go:221] Registration of the systemd container factory successfully
Oct 13 00:27:28.053065 kubelet[3373]: I1013 00:27:28.053023 3373 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Oct 13 00:27:28.058250 kubelet[3373]: I1013 00:27:28.058226 3373 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Oct 13 00:27:28.059873 kubelet[3373]: I1013 00:27:28.059641 3373 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Oct 13 00:27:28.059873 kubelet[3373]: I1013 00:27:28.059659 3373 status_manager.go:227] "Starting to sync pod status with apiserver"
Oct 13 00:27:28.059873 kubelet[3373]: I1013 00:27:28.059675 3373 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Oct 13 00:27:28.059873 kubelet[3373]: I1013 00:27:28.059681 3373 kubelet.go:2382] "Starting kubelet main sync loop"
Oct 13 00:27:28.059873 kubelet[3373]: E1013 00:27:28.059712 3373 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Oct 13 00:27:28.067308 kubelet[3373]: I1013 00:27:28.067285 3373 factory.go:221] Registration of the containerd container factory successfully
Oct 13 00:27:28.068085 kubelet[3373]: E1013 00:27:28.068066 3373 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Oct 13 00:27:28.106098 kubelet[3373]: I1013 00:27:28.106068 3373 cpu_manager.go:221] "Starting CPU manager" policy="none"
Oct 13 00:27:28.106098 kubelet[3373]: I1013 00:27:28.106088 3373 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Oct 13 00:27:28.106098 kubelet[3373]: I1013 00:27:28.106107 3373 state_mem.go:36] "Initialized new in-memory state store"
Oct 13 00:27:28.106484 kubelet[3373]: I1013 00:27:28.106462 3373 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Oct 13 00:27:28.106508 kubelet[3373]: I1013 00:27:28.106481 3373 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Oct 13 00:27:28.106508 kubelet[3373]: I1013 00:27:28.106496 3373 policy_none.go:49] "None policy: Start"
Oct 13 00:27:28.106508 kubelet[3373]: I1013 00:27:28.106504 3373 memory_manager.go:186] "Starting memorymanager" policy="None"
Oct 13 00:27:28.106562 kubelet[3373]: I1013 00:27:28.106511 3373 state_mem.go:35] "Initializing new in-memory state store"
Oct 13 00:27:28.106616 kubelet[3373]: I1013 00:27:28.106603 3373 state_mem.go:75] "Updated machine memory state"
Oct 13 00:27:28.109887 kubelet[3373]: I1013 00:27:28.109862 3373 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Oct 13 00:27:28.110291 kubelet[3373]: I1013 00:27:28.110234 3373 eviction_manager.go:189] "Eviction manager: starting control loop"
Oct 13 00:27:28.110341 kubelet[3373]: I1013 00:27:28.110312 3373 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Oct 13 00:27:28.111602 kubelet[3373]: I1013 00:27:28.111492 3373 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Oct 13 00:27:28.114374 kubelet[3373]: E1013 00:27:28.114284 3373 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Oct 13 00:27:28.160477 kubelet[3373]: I1013 00:27:28.160434 3373 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.1.0-a-27183f81a1"
Oct 13 00:27:29.731267 kubelet[3373]: I1013 00:27:28.160643 3373 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.1.0-a-27183f81a1"
Oct 13 00:27:29.731267 kubelet[3373]: I1013 00:27:28.160434 3373 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.1.0-a-27183f81a1"
Oct 13 00:27:29.731267 kubelet[3373]: W1013 00:27:28.173838 3373 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Oct 13 00:27:29.731267 kubelet[3373]: W1013 00:27:28.178611 3373 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Oct 13 00:27:29.731267 kubelet[3373]: W1013 00:27:28.178699 3373 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Oct 13 00:27:29.731267 kubelet[3373]: I1013 00:27:28.220308 3373 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.1.0-a-27183f81a1"
Oct 13 00:27:29.731267 kubelet[3373]: I1013 00:27:28.231381 3373 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459.1.0-a-27183f81a1"
Oct 13 00:27:29.731267 kubelet[3373]: I1013 00:27:28.244647 3373 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b67c4a4bd57bf558a522d05a734ae30e-k8s-certs\") pod \"kube-apiserver-ci-4459.1.0-a-27183f81a1\" (UID: \"b67c4a4bd57bf558a522d05a734ae30e\") " pod="kube-system/kube-apiserver-ci-4459.1.0-a-27183f81a1"
Oct 13 00:27:29.731267 kubelet[3373]: I1013 00:27:28.244679 3373 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/452b3a280f09ab84284e9004345f9de1-ca-certs\") pod \"kube-controller-manager-ci-4459.1.0-a-27183f81a1\" (UID: \"452b3a280f09ab84284e9004345f9de1\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-a-27183f81a1"
Oct 13 00:27:29.731635 kubelet[3373]: I1013 00:27:28.244693 3373 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/452b3a280f09ab84284e9004345f9de1-kubeconfig\") pod \"kube-controller-manager-ci-4459.1.0-a-27183f81a1\" (UID: \"452b3a280f09ab84284e9004345f9de1\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-a-27183f81a1"
Oct 13 00:27:29.731635 kubelet[3373]: I1013 00:27:28.244706 3373 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1bb9708877731109ab0ca44713ada2a4-kubeconfig\") pod \"kube-scheduler-ci-4459.1.0-a-27183f81a1\" (UID: \"1bb9708877731109ab0ca44713ada2a4\") " pod="kube-system/kube-scheduler-ci-4459.1.0-a-27183f81a1"
Oct 13 00:27:29.731635 kubelet[3373]: I1013 00:27:28.244721 3373 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/452b3a280f09ab84284e9004345f9de1-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.1.0-a-27183f81a1\" (UID: \"452b3a280f09ab84284e9004345f9de1\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-a-27183f81a1"
Oct 13 00:27:29.731635 kubelet[3373]: I1013 00:27:28.244733 3373 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b67c4a4bd57bf558a522d05a734ae30e-ca-certs\") pod \"kube-apiserver-ci-4459.1.0-a-27183f81a1\" (UID: \"b67c4a4bd57bf558a522d05a734ae30e\") " pod="kube-system/kube-apiserver-ci-4459.1.0-a-27183f81a1"
Oct 13 00:27:29.731635 kubelet[3373]: I1013 00:27:28.244744 3373 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b67c4a4bd57bf558a522d05a734ae30e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.1.0-a-27183f81a1\" (UID: \"b67c4a4bd57bf558a522d05a734ae30e\") " pod="kube-system/kube-apiserver-ci-4459.1.0-a-27183f81a1"
Oct 13 00:27:29.731708 kubelet[3373]: I1013 00:27:28.244753 3373 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/452b3a280f09ab84284e9004345f9de1-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.1.0-a-27183f81a1\" (UID: \"452b3a280f09ab84284e9004345f9de1\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-a-27183f81a1"
Oct 13 00:27:29.731708 kubelet[3373]: I1013 00:27:28.244774 3373 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/452b3a280f09ab84284e9004345f9de1-k8s-certs\") pod \"kube-controller-manager-ci-4459.1.0-a-27183f81a1\" (UID: \"452b3a280f09ab84284e9004345f9de1\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-a-27183f81a1"
Oct 13 00:27:29.731708 kubelet[3373]: I1013 00:27:29.021715 3373 apiserver.go:52] "Watching apiserver"
Oct 13 00:27:29.731708 kubelet[3373]: I1013 00:27:29.040281 3373 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Oct 13 00:27:29.731708 kubelet[3373]: I1013 00:27:29.096073 3373 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.1.0-a-27183f81a1"
Oct 13 00:27:29.731708 kubelet[3373]: I1013 00:27:29.096243 3373 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.1.0-a-27183f81a1"
Oct 13 00:27:29.731708 kubelet[3373]: W1013 00:27:29.113600 3373 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Oct 13 00:27:29.731708 kubelet[3373]: E1013 00:27:29.113649 3373 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.1.0-a-27183f81a1\" already exists" pod="kube-system/kube-apiserver-ci-4459.1.0-a-27183f81a1"
Oct 13 00:27:29.731708 kubelet[3373]: W1013 00:27:29.121980 3373 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Oct 13 00:27:29.731830 kubelet[3373]: E1013 00:27:29.122022 3373 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.1.0-a-27183f81a1\" already exists" pod="kube-system/kube-scheduler-ci-4459.1.0-a-27183f81a1"
Oct 13 00:27:29.731830 kubelet[3373]: I1013 00:27:29.139689 3373 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459.1.0-a-27183f81a1" podStartSLOduration=1.139674914 podStartE2EDuration="1.139674914s" podCreationTimestamp="2025-10-13 00:27:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 00:27:29.122546604 +0000 UTC m=+1.183273930" watchObservedRunningTime="2025-10-13 00:27:29.139674914 +0000 UTC m=+1.200402248"
Oct 13 00:27:29.731830 kubelet[3373]: I1013 00:27:29.150098 3373 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459.1.0-a-27183f81a1" podStartSLOduration=1.150082901 podStartE2EDuration="1.150082901s" podCreationTimestamp="2025-10-13 00:27:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 00:27:29.150061581 +0000 UTC m=+1.210788907" watchObservedRunningTime="2025-10-13 00:27:29.150082901 +0000 UTC m=+1.210810227"
Oct 13 00:27:29.731830 kubelet[3373]: I1013 00:27:29.150192 3373 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459.1.0-a-27183f81a1" podStartSLOduration=1.150189473 podStartE2EDuration="1.150189473s" podCreationTimestamp="2025-10-13 00:27:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 00:27:29.139948179 +0000 UTC m=+1.200675513" watchObservedRunningTime="2025-10-13 00:27:29.150189473 +0000 UTC m=+1.210916799"
Oct 13 00:27:29.731830 kubelet[3373]: I1013 00:27:29.730302 3373 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.1.0-a-27183f81a1"
Oct 13 00:27:31.804885 kubelet[3373]: I1013 00:27:31.804838 3373 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Oct 13 00:27:31.805814 containerd[1881]: time="2025-10-13T00:27:31.805649086Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Oct 13 00:27:31.806193 kubelet[3373]: I1013 00:27:31.805897 3373 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Oct 13 00:27:32.155360 systemd[1]: Created slice kubepods-besteffort-pod38dbe725_5707_40ad_a334_412d49c0fab0.slice - libcontainer container kubepods-besteffort-pod38dbe725_5707_40ad_a334_412d49c0fab0.slice.
Oct 13 00:27:32.165916 kubelet[3373]: I1013 00:27:32.165885 3373 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/38dbe725-5707-40ad-a334-412d49c0fab0-kube-proxy\") pod \"kube-proxy-5j5vd\" (UID: \"38dbe725-5707-40ad-a334-412d49c0fab0\") " pod="kube-system/kube-proxy-5j5vd"
Oct 13 00:27:32.166261 kubelet[3373]: I1013 00:27:32.166232 3373 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/38dbe725-5707-40ad-a334-412d49c0fab0-xtables-lock\") pod \"kube-proxy-5j5vd\" (UID: \"38dbe725-5707-40ad-a334-412d49c0fab0\") " pod="kube-system/kube-proxy-5j5vd"
Oct 13 00:27:32.166339 kubelet[3373]: I1013 00:27:32.166328 3373 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/38dbe725-5707-40ad-a334-412d49c0fab0-lib-modules\") pod \"kube-proxy-5j5vd\" (UID: \"38dbe725-5707-40ad-a334-412d49c0fab0\") " pod="kube-system/kube-proxy-5j5vd"
Oct 13 00:27:32.166401 kubelet[3373]: I1013 00:27:32.166388 3373 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmvs4\" (UniqueName: \"kubernetes.io/projected/38dbe725-5707-40ad-a334-412d49c0fab0-kube-api-access-bmvs4\") pod \"kube-proxy-5j5vd\" (UID: \"38dbe725-5707-40ad-a334-412d49c0fab0\") " pod="kube-system/kube-proxy-5j5vd"
Oct 13 00:27:32.271932 kubelet[3373]: E1013 00:27:32.271891 3373 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Oct 13 00:27:32.271932 kubelet[3373]: E1013 00:27:32.271924 3373 projected.go:194] Error preparing data for projected volume kube-api-access-bmvs4 for pod kube-system/kube-proxy-5j5vd: configmap "kube-root-ca.crt" not found
Oct 13 00:27:32.272099 kubelet[3373]: E1013 00:27:32.272005 3373 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/38dbe725-5707-40ad-a334-412d49c0fab0-kube-api-access-bmvs4 podName:38dbe725-5707-40ad-a334-412d49c0fab0 nodeName:}" failed. No retries permitted until 2025-10-13 00:27:32.771987503 +0000 UTC m=+4.832714837 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-bmvs4" (UniqueName: "kubernetes.io/projected/38dbe725-5707-40ad-a334-412d49c0fab0-kube-api-access-bmvs4") pod "kube-proxy-5j5vd" (UID: "38dbe725-5707-40ad-a334-412d49c0fab0") : configmap "kube-root-ca.crt" not found
Oct 13 00:27:33.032556 systemd[1]: Created slice kubepods-besteffort-podce0c6aa1_2ce8_4b08_9e9a_9887fcce37a9.slice - libcontainer container kubepods-besteffort-podce0c6aa1_2ce8_4b08_9e9a_9887fcce37a9.slice.
Oct 13 00:27:33.063690 containerd[1881]: time="2025-10-13T00:27:33.063416495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5j5vd,Uid:38dbe725-5707-40ad-a334-412d49c0fab0,Namespace:kube-system,Attempt:0,}"
Oct 13 00:27:33.072420 kubelet[3373]: I1013 00:27:33.072337 3373 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ce0c6aa1-2ce8-4b08-9e9a-9887fcce37a9-var-lib-calico\") pod \"tigera-operator-755d956888-75slq\" (UID: \"ce0c6aa1-2ce8-4b08-9e9a-9887fcce37a9\") " pod="tigera-operator/tigera-operator-755d956888-75slq"
Oct 13 00:27:33.072420 kubelet[3373]: I1013 00:27:33.072374 3373 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2zjr\" (UniqueName: \"kubernetes.io/projected/ce0c6aa1-2ce8-4b08-9e9a-9887fcce37a9-kube-api-access-p2zjr\") pod \"tigera-operator-755d956888-75slq\" (UID: \"ce0c6aa1-2ce8-4b08-9e9a-9887fcce37a9\") " pod="tigera-operator/tigera-operator-755d956888-75slq"
Oct 13 00:27:35.585507 kubelet[3373]: E1013 00:27:35.585076 3373 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.525s"
Oct 13 00:27:35.621738 containerd[1881]: time="2025-10-13T00:27:35.621458135Z" level=info msg="connecting to shim 89fba49b73ee303a226f457d1408e91f25a0c47b451559e277987f5a0b9d3602" address="unix:///run/containerd/s/eabbbecad19535b282dc12d011fe0719d9b912a1ef980de717d96fe43aa5c0de" namespace=k8s.io protocol=ttrpc version=3
Oct 13 00:27:35.639083 systemd[1]: Started cri-containerd-89fba49b73ee303a226f457d1408e91f25a0c47b451559e277987f5a0b9d3602.scope - libcontainer container 89fba49b73ee303a226f457d1408e91f25a0c47b451559e277987f5a0b9d3602.
Oct 13 00:27:35.660470 containerd[1881]: time="2025-10-13T00:27:35.660061257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5j5vd,Uid:38dbe725-5707-40ad-a334-412d49c0fab0,Namespace:kube-system,Attempt:0,} returns sandbox id \"89fba49b73ee303a226f457d1408e91f25a0c47b451559e277987f5a0b9d3602\""
Oct 13 00:27:35.663414 containerd[1881]: time="2025-10-13T00:27:35.663361801Z" level=info msg="CreateContainer within sandbox \"89fba49b73ee303a226f457d1408e91f25a0c47b451559e277987f5a0b9d3602\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Oct 13 00:27:35.690811 containerd[1881]: time="2025-10-13T00:27:35.690222662Z" level=info msg="Container 0ebc556fa2c1534b12b367fb3feb8ea522a23751a7648e23422276ba829524b2: CDI devices from CRI Config.CDIDevices: []"
Oct 13 00:27:35.693095 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3744517735.mount: Deactivated successfully.
Oct 13 00:27:35.708840 containerd[1881]: time="2025-10-13T00:27:35.708777250Z" level=info msg="CreateContainer within sandbox \"89fba49b73ee303a226f457d1408e91f25a0c47b451559e277987f5a0b9d3602\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0ebc556fa2c1534b12b367fb3feb8ea522a23751a7648e23422276ba829524b2\""
Oct 13 00:27:35.710049 containerd[1881]: time="2025-10-13T00:27:35.710023012Z" level=info msg="StartContainer for \"0ebc556fa2c1534b12b367fb3feb8ea522a23751a7648e23422276ba829524b2\""
Oct 13 00:27:35.712163 containerd[1881]: time="2025-10-13T00:27:35.712038824Z" level=info msg="connecting to shim 0ebc556fa2c1534b12b367fb3feb8ea522a23751a7648e23422276ba829524b2" address="unix:///run/containerd/s/eabbbecad19535b282dc12d011fe0719d9b912a1ef980de717d96fe43aa5c0de" protocol=ttrpc version=3
Oct 13 00:27:35.728055 systemd[1]: Started cri-containerd-0ebc556fa2c1534b12b367fb3feb8ea522a23751a7648e23422276ba829524b2.scope - libcontainer container 0ebc556fa2c1534b12b367fb3feb8ea522a23751a7648e23422276ba829524b2.
Oct 13 00:27:35.735433 containerd[1881]: time="2025-10-13T00:27:35.735337868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-75slq,Uid:ce0c6aa1-2ce8-4b08-9e9a-9887fcce37a9,Namespace:tigera-operator,Attempt:0,}"
Oct 13 00:27:35.759164 containerd[1881]: time="2025-10-13T00:27:35.759070231Z" level=info msg="StartContainer for \"0ebc556fa2c1534b12b367fb3feb8ea522a23751a7648e23422276ba829524b2\" returns successfully"
Oct 13 00:27:35.774771 containerd[1881]: time="2025-10-13T00:27:35.774736465Z" level=info msg="connecting to shim 665f84f9805c69ed81a342fc397372e3c98d72fb4c4661eb5b89da8dc04ce2ef" address="unix:///run/containerd/s/c2b7b16aa4b77c3b0e0d0479fcef1ac616f9a12dc5ba04fae3aeaef0cb86a79f" namespace=k8s.io protocol=ttrpc version=3
Oct 13 00:27:35.798196 systemd[1]: Started cri-containerd-665f84f9805c69ed81a342fc397372e3c98d72fb4c4661eb5b89da8dc04ce2ef.scope - libcontainer container 665f84f9805c69ed81a342fc397372e3c98d72fb4c4661eb5b89da8dc04ce2ef.
Oct 13 00:27:35.838275 containerd[1881]: time="2025-10-13T00:27:35.838170715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-75slq,Uid:ce0c6aa1-2ce8-4b08-9e9a-9887fcce37a9,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"665f84f9805c69ed81a342fc397372e3c98d72fb4c4661eb5b89da8dc04ce2ef\""
Oct 13 00:27:35.841375 containerd[1881]: time="2025-10-13T00:27:35.841340239Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\""
Oct 13 00:27:36.124503 kubelet[3373]: I1013 00:27:36.124281 3373 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5j5vd" podStartSLOduration=4.124264883 podStartE2EDuration="4.124264883s" podCreationTimestamp="2025-10-13 00:27:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 00:27:36.124217897 +0000 UTC m=+8.184945247" watchObservedRunningTime="2025-10-13 00:27:36.124264883 +0000 UTC m=+8.184992209"
Oct 13 00:27:37.704113 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4037897841.mount: Deactivated successfully.
Oct 13 00:27:40.819247 containerd[1881]: time="2025-10-13T00:27:40.819188422Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 00:27:40.821888 containerd[1881]: time="2025-10-13T00:27:40.821852199Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=22152365"
Oct 13 00:27:40.825823 containerd[1881]: time="2025-10-13T00:27:40.825777322Z" level=info msg="ImageCreate event name:\"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 00:27:40.831056 containerd[1881]: time="2025-10-13T00:27:40.830953888Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"22148360\" in 4.989569272s"
Oct 13 00:27:40.831056 containerd[1881]: time="2025-10-13T00:27:40.830987641Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\""
Oct 13 00:27:40.831988 containerd[1881]: time="2025-10-13T00:27:40.831496610Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 00:27:40.834362 containerd[1881]: time="2025-10-13T00:27:40.834331281Z" level=info msg="CreateContainer within sandbox \"665f84f9805c69ed81a342fc397372e3c98d72fb4c4661eb5b89da8dc04ce2ef\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Oct 13 00:27:40.859383 containerd[1881]: time="2025-10-13T00:27:40.859299500Z" level=info msg="Container 97e677ec2dbcf93c405f2de8f90b5eeef818c0187851b2d75ed32268ec6b2dcc: CDI devices from CRI Config.CDIDevices: []"
Oct 13 00:27:40.872220 containerd[1881]: time="2025-10-13T00:27:40.872181132Z" level=info msg="CreateContainer within sandbox \"665f84f9805c69ed81a342fc397372e3c98d72fb4c4661eb5b89da8dc04ce2ef\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"97e677ec2dbcf93c405f2de8f90b5eeef818c0187851b2d75ed32268ec6b2dcc\""
Oct 13 00:27:40.872867 containerd[1881]: time="2025-10-13T00:27:40.872765703Z" level=info msg="StartContainer for \"97e677ec2dbcf93c405f2de8f90b5eeef818c0187851b2d75ed32268ec6b2dcc\""
Oct 13 00:27:40.873480 containerd[1881]: time="2025-10-13T00:27:40.873456510Z" level=info msg="connecting to shim 97e677ec2dbcf93c405f2de8f90b5eeef818c0187851b2d75ed32268ec6b2dcc" address="unix:///run/containerd/s/c2b7b16aa4b77c3b0e0d0479fcef1ac616f9a12dc5ba04fae3aeaef0cb86a79f" protocol=ttrpc version=3
Oct 13 00:27:40.891057 systemd[1]: Started cri-containerd-97e677ec2dbcf93c405f2de8f90b5eeef818c0187851b2d75ed32268ec6b2dcc.scope - libcontainer container 97e677ec2dbcf93c405f2de8f90b5eeef818c0187851b2d75ed32268ec6b2dcc.
Oct 13 00:27:40.913977 containerd[1881]: time="2025-10-13T00:27:40.913932529Z" level=info msg="StartContainer for \"97e677ec2dbcf93c405f2de8f90b5eeef818c0187851b2d75ed32268ec6b2dcc\" returns successfully"
Oct 13 00:27:41.136829 kubelet[3373]: I1013 00:27:41.136686 3373 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-75slq" podStartSLOduration=4.144424517 podStartE2EDuration="9.136671879s" podCreationTimestamp="2025-10-13 00:27:32 +0000 UTC" firstStartedPulling="2025-10-13 00:27:35.839843172 +0000 UTC m=+7.900570498" lastFinishedPulling="2025-10-13 00:27:40.832090534 +0000 UTC m=+12.892817860" observedRunningTime="2025-10-13 00:27:41.136564628 +0000 UTC m=+13.197291954" watchObservedRunningTime="2025-10-13 00:27:41.136671879 +0000 UTC m=+13.197399205"
Oct 13 00:27:45.960837 sudo[2345]: pam_unix(sudo:session): session closed for user root
Oct 13 00:27:46.046734 sshd[2344]: Connection closed by 10.200.16.10 port 38974
Oct 13 00:27:46.050043 sshd-session[2341]: pam_unix(sshd:session): session closed for user core
Oct 13 00:27:46.053484 systemd[1]: sshd@6-10.200.20.34:22-10.200.16.10:38974.service: Deactivated successfully.
Oct 13 00:27:46.058219 systemd[1]: session-9.scope: Deactivated successfully.
Oct 13 00:27:46.058548 systemd[1]: session-9.scope: Consumed 2.955s CPU time, 223.7M memory peak.
Oct 13 00:27:46.059683 systemd-logind[1859]: Session 9 logged out. Waiting for processes to exit.
Oct 13 00:27:46.061314 systemd-logind[1859]: Removed session 9.
Oct 13 00:27:52.752661 systemd[1]: Created slice kubepods-besteffort-podfb1fb7ea_aa4d_40ea_a76b_02e62d51f76a.slice - libcontainer container kubepods-besteffort-podfb1fb7ea_aa4d_40ea_a76b_02e62d51f76a.slice.
Oct 13 00:27:52.786658 kubelet[3373]: I1013 00:27:52.786570 3373 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/fb1fb7ea-aa4d-40ea-a76b-02e62d51f76a-typha-certs\") pod \"calico-typha-795dc5677b-82jfd\" (UID: \"fb1fb7ea-aa4d-40ea-a76b-02e62d51f76a\") " pod="calico-system/calico-typha-795dc5677b-82jfd"
Oct 13 00:27:52.786658 kubelet[3373]: I1013 00:27:52.786608 3373 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb1fb7ea-aa4d-40ea-a76b-02e62d51f76a-tigera-ca-bundle\") pod \"calico-typha-795dc5677b-82jfd\" (UID: \"fb1fb7ea-aa4d-40ea-a76b-02e62d51f76a\") " pod="calico-system/calico-typha-795dc5677b-82jfd"
Oct 13 00:27:52.786658 kubelet[3373]: I1013 00:27:52.786623 3373 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpq9j\" (UniqueName: \"kubernetes.io/projected/fb1fb7ea-aa4d-40ea-a76b-02e62d51f76a-kube-api-access-dpq9j\") pod \"calico-typha-795dc5677b-82jfd\" (UID: \"fb1fb7ea-aa4d-40ea-a76b-02e62d51f76a\") " pod="calico-system/calico-typha-795dc5677b-82jfd"
Oct 13 00:27:52.939587 systemd[1]: Created slice kubepods-besteffort-pod8ee809b8_7189_45c9_826e_d61803496b70.slice - libcontainer container kubepods-besteffort-pod8ee809b8_7189_45c9_826e_d61803496b70.slice.
Oct 13 00:27:52.989149 kubelet[3373]: I1013 00:27:52.988868 3373 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8ee809b8-7189-45c9-826e-d61803496b70-xtables-lock\") pod \"calico-node-ckqwb\" (UID: \"8ee809b8-7189-45c9-826e-d61803496b70\") " pod="calico-system/calico-node-ckqwb"
Oct 13 00:27:52.989149 kubelet[3373]: I1013 00:27:52.988908 3373 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/8ee809b8-7189-45c9-826e-d61803496b70-cni-log-dir\") pod \"calico-node-ckqwb\" (UID: \"8ee809b8-7189-45c9-826e-d61803496b70\") " pod="calico-system/calico-node-ckqwb"
Oct 13 00:27:52.989149 kubelet[3373]: I1013 00:27:52.988921 3373 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/8ee809b8-7189-45c9-826e-d61803496b70-flexvol-driver-host\") pod \"calico-node-ckqwb\" (UID: \"8ee809b8-7189-45c9-826e-d61803496b70\") " pod="calico-system/calico-node-ckqwb"
Oct 13 00:27:52.989919 kubelet[3373]: I1013 00:27:52.988934 3373 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8ee809b8-7189-45c9-826e-d61803496b70-lib-modules\") pod \"calico-node-ckqwb\" (UID: \"8ee809b8-7189-45c9-826e-d61803496b70\") " pod="calico-system/calico-node-ckqwb"
Oct 13 00:27:52.989919 kubelet[3373]: I1013 00:27:52.989765 3373 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/8ee809b8-7189-45c9-826e-d61803496b70-var-run-calico\") pod \"calico-node-ckqwb\" (UID: \"8ee809b8-7189-45c9-826e-d61803496b70\") " pod="calico-system/calico-node-ckqwb"
Oct 13 00:27:52.989919 kubelet[3373]: I1013 00:27:52.989785 3373 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/8ee809b8-7189-45c9-826e-d61803496b70-policysync\") pod \"calico-node-ckqwb\" (UID: \"8ee809b8-7189-45c9-826e-d61803496b70\") " pod="calico-system/calico-node-ckqwb"
Oct 13 00:27:52.989919 kubelet[3373]: I1013 00:27:52.989813 3373 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c95cr\" (UniqueName: \"kubernetes.io/projected/8ee809b8-7189-45c9-826e-d61803496b70-kube-api-access-c95cr\") pod \"calico-node-ckqwb\" (UID: \"8ee809b8-7189-45c9-826e-d61803496b70\") " pod="calico-system/calico-node-ckqwb"
Oct 13 00:27:52.989919 kubelet[3373]: I1013 00:27:52.989828 3373 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/8ee809b8-7189-45c9-826e-d61803496b70-cni-bin-dir\") pod \"calico-node-ckqwb\" (UID: \"8ee809b8-7189-45c9-826e-d61803496b70\") " pod="calico-system/calico-node-ckqwb"
Oct 13 00:27:52.990102 kubelet[3373]: I1013 00:27:52.989845 3373 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/8ee809b8-7189-45c9-826e-d61803496b70-cni-net-dir\") pod \"calico-node-ckqwb\" (UID: \"8ee809b8-7189-45c9-826e-d61803496b70\") " pod="calico-system/calico-node-ckqwb"
Oct 13 00:27:52.990102 kubelet[3373]: I1013 00:27:52.989860 3373 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8ee809b8-7189-45c9-826e-d61803496b70-tigera-ca-bundle\") pod \"calico-node-ckqwb\" (UID: \"8ee809b8-7189-45c9-826e-d61803496b70\") " pod="calico-system/calico-node-ckqwb"
Oct 13 00:27:52.990102 kubelet[3373]: I1013 00:27:52.989879 3373 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/8ee809b8-7189-45c9-826e-d61803496b70-node-certs\") pod \"calico-node-ckqwb\" (UID: \"8ee809b8-7189-45c9-826e-d61803496b70\") " pod="calico-system/calico-node-ckqwb"
Oct 13 00:27:52.990102 kubelet[3373]: I1013 00:27:52.989887 3373 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8ee809b8-7189-45c9-826e-d61803496b70-var-lib-calico\") pod \"calico-node-ckqwb\" (UID: \"8ee809b8-7189-45c9-826e-d61803496b70\") " pod="calico-system/calico-node-ckqwb"
Oct 13 00:27:53.042455 kubelet[3373]: E1013 00:27:53.042097 3373 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b5nsr" podUID="4ed0fdc7-31ce-42c5-b3c9-46bf732b034a"
Oct 13 00:27:53.055557 containerd[1881]: time="2025-10-13T00:27:53.055507228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-795dc5677b-82jfd,Uid:fb1fb7ea-aa4d-40ea-a76b-02e62d51f76a,Namespace:calico-system,Attempt:0,}"
Oct 13 00:27:53.090746 kubelet[3373]: I1013 00:27:53.090710 3373 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4ed0fdc7-31ce-42c5-b3c9-46bf732b034a-kubelet-dir\") pod \"csi-node-driver-b5nsr\" (UID: \"4ed0fdc7-31ce-42c5-b3c9-46bf732b034a\") " pod="calico-system/csi-node-driver-b5nsr"
Oct 13 00:27:53.090882 kubelet[3373]: I1013 00:27:53.090770 3373 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4ed0fdc7-31ce-42c5-b3c9-46bf732b034a-socket-dir\") pod \"csi-node-driver-b5nsr\" (UID: \"4ed0fdc7-31ce-42c5-b3c9-46bf732b034a\") " pod="calico-system/csi-node-driver-b5nsr"
Oct 13 00:27:53.090882 kubelet[3373]: I1013 00:27:53.090843 3373 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msrjw\" (UniqueName: \"kubernetes.io/projected/4ed0fdc7-31ce-42c5-b3c9-46bf732b034a-kube-api-access-msrjw\") pod \"csi-node-driver-b5nsr\" (UID: \"4ed0fdc7-31ce-42c5-b3c9-46bf732b034a\") " pod="calico-system/csi-node-driver-b5nsr"
Oct 13 00:27:53.090882 kubelet[3373]: I1013 00:27:53.090865 3373 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4ed0fdc7-31ce-42c5-b3c9-46bf732b034a-registration-dir\") pod \"csi-node-driver-b5nsr\" (UID: \"4ed0fdc7-31ce-42c5-b3c9-46bf732b034a\") " pod="calico-system/csi-node-driver-b5nsr"
Oct 13 00:27:53.090960 kubelet[3373]: I1013 00:27:53.090894 3373 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/4ed0fdc7-31ce-42c5-b3c9-46bf732b034a-varrun\") pod \"csi-node-driver-b5nsr\" (UID: \"4ed0fdc7-31ce-42c5-b3c9-46bf732b034a\") " pod="calico-system/csi-node-driver-b5nsr"
Oct 13 00:27:53.096907 kubelet[3373]: E1013 00:27:53.096876 3373 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 00:27:53.096907 kubelet[3373]: W1013 00:27:53.096898 3373 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 00:27:53.097020 kubelet[3373]: E1013 00:27:53.096925 3373 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 00:27:53.108910 containerd[1881]: time="2025-10-13T00:27:53.108868244Z" level=info msg="connecting to shim 8ee74028e756b0fc2c2c647921315db941187fadc05e750b3ad929e76593bb0f" address="unix:///run/containerd/s/68cc931120f4c7d4753f61b91ec147eb3f988d71403b8b57ec7db616e3663ac8" namespace=k8s.io protocol=ttrpc version=3
Oct 13 00:27:53.122913 kubelet[3373]: E1013 00:27:53.122802 3373 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 00:27:53.122913 kubelet[3373]: W1013 00:27:53.122847 3373 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 00:27:53.122913 kubelet[3373]: E1013 00:27:53.122868 3373 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 00:27:53.147125 systemd[1]: Started cri-containerd-8ee74028e756b0fc2c2c647921315db941187fadc05e750b3ad929e76593bb0f.scope - libcontainer container 8ee74028e756b0fc2c2c647921315db941187fadc05e750b3ad929e76593bb0f.
Oct 13 00:27:53.193175 containerd[1881]: time="2025-10-13T00:27:53.193058785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-795dc5677b-82jfd,Uid:fb1fb7ea-aa4d-40ea-a76b-02e62d51f76a,Namespace:calico-system,Attempt:0,} returns sandbox id \"8ee74028e756b0fc2c2c647921315db941187fadc05e750b3ad929e76593bb0f\""
Oct 13 00:27:53.195095 kubelet[3373]: E1013 00:27:53.195064 3373 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 00:27:53.195095 kubelet[3373]: W1013 00:27:53.195089 3373 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 00:27:53.195297 kubelet[3373]: E1013 00:27:53.195113 3373 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 00:27:53.195497 kubelet[3373]: E1013 00:27:53.195362 3373 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 00:27:53.195497 kubelet[3373]: W1013 00:27:53.195472 3373 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 00:27:53.195497 kubelet[3373]: E1013 00:27:53.195487 3373 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 00:27:53.196690 kubelet[3373]: E1013 00:27:53.196661 3373 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 00:27:53.196690 kubelet[3373]: W1013 00:27:53.196676 3373 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 00:27:53.196776 kubelet[3373]: E1013 00:27:53.196697 3373 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 00:27:53.196882 kubelet[3373]: E1013 00:27:53.196864 3373 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 00:27:53.196882 kubelet[3373]: W1013 00:27:53.196876 3373 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 00:27:53.196882 kubelet[3373]: E1013 00:27:53.196898 3373 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 00:27:53.197127 containerd[1881]: time="2025-10-13T00:27:53.197034864Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\""
Oct 13 00:27:53.197190 kubelet[3373]: E1013 00:27:53.197168 3373 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 00:27:53.197190 kubelet[3373]: W1013 00:27:53.197185 3373 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 00:27:53.197348 kubelet[3373]: E1013 00:27:53.197330 3373 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 00:27:53.197512 kubelet[3373]: E1013 00:27:53.197495 3373 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 00:27:53.197512 kubelet[3373]: W1013 00:27:53.197508 3373 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 00:27:53.197634 kubelet[3373]: E1013 00:27:53.197575 3373 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 00:27:53.198047 kubelet[3373]: E1013 00:27:53.198015 3373 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 00:27:53.198047 kubelet[3373]: W1013 00:27:53.198030 3373 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 00:27:53.198047 kubelet[3373]: E1013 00:27:53.198043 3373 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 00:27:53.198494 kubelet[3373]: E1013 00:27:53.198375 3373 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 00:27:53.198494 kubelet[3373]: W1013 00:27:53.198389 3373 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 00:27:53.198494 kubelet[3373]: E1013 00:27:53.198401 3373 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 00:27:53.199086 kubelet[3373]: E1013 00:27:53.198610 3373 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 00:27:53.199086 kubelet[3373]: W1013 00:27:53.198622 3373 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 00:27:53.199086 kubelet[3373]: E1013 00:27:53.198636 3373 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 00:27:53.199211 kubelet[3373]: E1013 00:27:53.199197 3373 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 00:27:53.199335 kubelet[3373]: W1013 00:27:53.199320 3373 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 00:27:53.199440 kubelet[3373]: E1013 00:27:53.199412 3373 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 00:27:53.199599 kubelet[3373]: E1013 00:27:53.199588 3373 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 00:27:53.199730 kubelet[3373]: W1013 00:27:53.199655 3373 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 00:27:53.199730 kubelet[3373]: E1013 00:27:53.199691 3373 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 00:27:53.199832 kubelet[3373]: E1013 00:27:53.199822 3373 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 00:27:53.199948 kubelet[3373]: W1013 00:27:53.199867 3373 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 00:27:53.199948 kubelet[3373]: E1013 00:27:53.199892 3373 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 00:27:53.200110 kubelet[3373]: E1013 00:27:53.200101 3373 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 00:27:53.200228 kubelet[3373]: W1013 00:27:53.200160 3373 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 00:27:53.200228 kubelet[3373]: E1013 00:27:53.200194 3373 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 00:27:53.200387 kubelet[3373]: E1013 00:27:53.200378 3373 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 00:27:53.200508 kubelet[3373]: W1013 00:27:53.200436 3373 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 00:27:53.200508 kubelet[3373]: E1013 00:27:53.200468 3373 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 00:27:53.200621 kubelet[3373]: E1013 00:27:53.200611 3373 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 00:27:53.200672 kubelet[3373]: W1013 00:27:53.200663 3373 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 00:27:53.200751 kubelet[3373]: E1013 00:27:53.200727 3373 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 00:27:53.200897 kubelet[3373]: E1013 00:27:53.200887 3373 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 00:27:53.201064 kubelet[3373]: W1013 00:27:53.200967 3373 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 00:27:53.201064 kubelet[3373]: E1013 00:27:53.200996 3373 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Oct 13 00:27:53.201304 kubelet[3373]: E1013 00:27:53.201230 3373 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:27:53.201304 kubelet[3373]: W1013 00:27:53.201241 3373 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:27:53.201304 kubelet[3373]: E1013 00:27:53.201262 3373 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:27:53.201473 kubelet[3373]: E1013 00:27:53.201464 3373 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:27:53.201619 kubelet[3373]: W1013 00:27:53.201513 3373 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:27:53.201619 kubelet[3373]: E1013 00:27:53.201541 3373 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:27:53.201778 kubelet[3373]: E1013 00:27:53.201766 3373 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:27:53.201833 kubelet[3373]: W1013 00:27:53.201822 3373 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:27:53.201925 kubelet[3373]: E1013 00:27:53.201890 3373 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:27:53.202178 kubelet[3373]: E1013 00:27:53.202128 3373 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:27:53.202304 kubelet[3373]: W1013 00:27:53.202289 3373 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:27:53.202401 kubelet[3373]: E1013 00:27:53.202375 3373 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:27:53.202552 kubelet[3373]: E1013 00:27:53.202541 3373 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:27:53.202615 kubelet[3373]: W1013 00:27:53.202604 3373 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:27:53.202800 kubelet[3373]: E1013 00:27:53.202790 3373 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:27:53.202956 kubelet[3373]: W1013 00:27:53.202851 3373 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:27:53.203079 kubelet[3373]: E1013 00:27:53.203061 3373 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:27:53.203114 kubelet[3373]: E1013 00:27:53.203095 3373 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:27:53.203989 kubelet[3373]: E1013 00:27:53.203173 3373 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:27:53.204097 kubelet[3373]: W1013 00:27:53.204079 3373 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:27:53.204245 kubelet[3373]: E1013 00:27:53.204220 3373 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:27:53.204478 kubelet[3373]: E1013 00:27:53.204378 3373 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:27:53.204478 kubelet[3373]: W1013 00:27:53.204389 3373 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:27:53.204478 kubelet[3373]: E1013 00:27:53.204413 3373 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:27:53.204669 kubelet[3373]: E1013 00:27:53.204656 3373 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:27:53.204720 kubelet[3373]: W1013 00:27:53.204711 3373 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:27:53.204786 kubelet[3373]: E1013 00:27:53.204760 3373 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:27:53.211772 kubelet[3373]: E1013 00:27:53.211747 3373 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:27:53.211772 kubelet[3373]: W1013 00:27:53.211768 3373 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:27:53.211862 kubelet[3373]: E1013 00:27:53.211785 3373 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:27:53.244961 containerd[1881]: time="2025-10-13T00:27:53.244654421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ckqwb,Uid:8ee809b8-7189-45c9-826e-d61803496b70,Namespace:calico-system,Attempt:0,}" Oct 13 00:27:53.302006 containerd[1881]: time="2025-10-13T00:27:53.301871640Z" level=info msg="connecting to shim de0bd2ee6cc64675c3f504c7774fcccbc8d19a326b69a065fb79abafb24cd5ed" address="unix:///run/containerd/s/0f113b1674d89fcc638c3e98e5fc118396eb382821d5ffd7884521f0fdde2c07" namespace=k8s.io protocol=ttrpc version=3 Oct 13 00:27:53.326085 systemd[1]: Started cri-containerd-de0bd2ee6cc64675c3f504c7774fcccbc8d19a326b69a065fb79abafb24cd5ed.scope - libcontainer container de0bd2ee6cc64675c3f504c7774fcccbc8d19a326b69a065fb79abafb24cd5ed. Oct 13 00:27:53.351126 containerd[1881]: time="2025-10-13T00:27:53.351085956Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ckqwb,Uid:8ee809b8-7189-45c9-826e-d61803496b70,Namespace:calico-system,Attempt:0,} returns sandbox id \"de0bd2ee6cc64675c3f504c7774fcccbc8d19a326b69a065fb79abafb24cd5ed\"" Oct 13 00:27:54.715536 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4225164319.mount: Deactivated successfully. 
Oct 13 00:27:55.060733 kubelet[3373]: E1013 00:27:55.060511 3373 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b5nsr" podUID="4ed0fdc7-31ce-42c5-b3c9-46bf732b034a" Oct 13 00:27:55.388548 containerd[1881]: time="2025-10-13T00:27:55.388202859Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:27:55.391216 containerd[1881]: time="2025-10-13T00:27:55.391183432Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=33105775" Oct 13 00:27:55.393931 containerd[1881]: time="2025-10-13T00:27:55.393894419Z" level=info msg="ImageCreate event name:\"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:27:55.398451 containerd[1881]: time="2025-10-13T00:27:55.398410372Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:27:55.399304 containerd[1881]: time="2025-10-13T00:27:55.399190838Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"33105629\" in 2.202129477s" Oct 13 00:27:55.399304 containerd[1881]: time="2025-10-13T00:27:55.399224463Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference 
\"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\"" Oct 13 00:27:55.400272 containerd[1881]: time="2025-10-13T00:27:55.399880909Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Oct 13 00:27:55.409868 containerd[1881]: time="2025-10-13T00:27:55.409326244Z" level=info msg="CreateContainer within sandbox \"8ee74028e756b0fc2c2c647921315db941187fadc05e750b3ad929e76593bb0f\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 13 00:27:55.433032 containerd[1881]: time="2025-10-13T00:27:55.432495265Z" level=info msg="Container 8157e21e1ff80d9291fdb2867231459523da06056d66e25d895a158bab8e4159: CDI devices from CRI Config.CDIDevices: []" Oct 13 00:27:55.435152 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2436160669.mount: Deactivated successfully. Oct 13 00:27:55.452746 containerd[1881]: time="2025-10-13T00:27:55.452718219Z" level=info msg="CreateContainer within sandbox \"8ee74028e756b0fc2c2c647921315db941187fadc05e750b3ad929e76593bb0f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"8157e21e1ff80d9291fdb2867231459523da06056d66e25d895a158bab8e4159\"" Oct 13 00:27:55.454296 containerd[1881]: time="2025-10-13T00:27:55.454262095Z" level=info msg="StartContainer for \"8157e21e1ff80d9291fdb2867231459523da06056d66e25d895a158bab8e4159\"" Oct 13 00:27:55.455254 containerd[1881]: time="2025-10-13T00:27:55.455233920Z" level=info msg="connecting to shim 8157e21e1ff80d9291fdb2867231459523da06056d66e25d895a158bab8e4159" address="unix:///run/containerd/s/68cc931120f4c7d4753f61b91ec147eb3f988d71403b8b57ec7db616e3663ac8" protocol=ttrpc version=3 Oct 13 00:27:55.470093 systemd[1]: Started cri-containerd-8157e21e1ff80d9291fdb2867231459523da06056d66e25d895a158bab8e4159.scope - libcontainer container 8157e21e1ff80d9291fdb2867231459523da06056d66e25d895a158bab8e4159. 
Oct 13 00:27:55.511255 containerd[1881]: time="2025-10-13T00:27:55.511196424Z" level=info msg="StartContainer for \"8157e21e1ff80d9291fdb2867231459523da06056d66e25d895a158bab8e4159\" returns successfully" Oct 13 00:27:56.194504 kubelet[3373]: E1013 00:27:56.194472 3373 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:27:56.194995 kubelet[3373]: W1013 00:27:56.194839 3373 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:27:56.194995 kubelet[3373]: E1013 00:27:56.194870 3373 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:27:56.195261 kubelet[3373]: E1013 00:27:56.195066 3373 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:27:56.195261 kubelet[3373]: W1013 00:27:56.195075 3373 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:27:56.195261 kubelet[3373]: E1013 00:27:56.195117 3373 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:27:56.195473 kubelet[3373]: E1013 00:27:56.195403 3373 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:27:56.195473 kubelet[3373]: W1013 00:27:56.195415 3373 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:27:56.195473 kubelet[3373]: E1013 00:27:56.195427 3373 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:27:56.195698 kubelet[3373]: E1013 00:27:56.195688 3373 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:27:56.195845 kubelet[3373]: W1013 00:27:56.195747 3373 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:27:56.195845 kubelet[3373]: E1013 00:27:56.195761 3373 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:27:56.196062 kubelet[3373]: E1013 00:27:56.196051 3373 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:27:56.196156 kubelet[3373]: W1013 00:27:56.196145 3373 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:27:56.196308 kubelet[3373]: E1013 00:27:56.196226 3373 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:27:56.196507 kubelet[3373]: E1013 00:27:56.196453 3373 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:27:56.196507 kubelet[3373]: W1013 00:27:56.196463 3373 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:27:56.196507 kubelet[3373]: E1013 00:27:56.196472 3373 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:27:56.196746 kubelet[3373]: E1013 00:27:56.196698 3373 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:27:56.196746 kubelet[3373]: W1013 00:27:56.196708 3373 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:27:56.196746 kubelet[3373]: E1013 00:27:56.196717 3373 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:27:56.197033 kubelet[3373]: E1013 00:27:56.196965 3373 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:27:56.197033 kubelet[3373]: W1013 00:27:56.196975 3373 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:27:56.197033 kubelet[3373]: E1013 00:27:56.196984 3373 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:27:56.197325 kubelet[3373]: E1013 00:27:56.197237 3373 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:27:56.197325 kubelet[3373]: W1013 00:27:56.197249 3373 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:27:56.197325 kubelet[3373]: E1013 00:27:56.197257 3373 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:27:56.197455 kubelet[3373]: E1013 00:27:56.197445 3373 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:27:56.197574 kubelet[3373]: W1013 00:27:56.197500 3373 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:27:56.197574 kubelet[3373]: E1013 00:27:56.197510 3373 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:27:56.197722 kubelet[3373]: E1013 00:27:56.197713 3373 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:27:56.197784 kubelet[3373]: W1013 00:27:56.197769 3373 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:27:56.197901 kubelet[3373]: E1013 00:27:56.197833 3373 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:27:56.198071 kubelet[3373]: E1013 00:27:56.198062 3373 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:27:56.198145 kubelet[3373]: W1013 00:27:56.198136 3373 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:27:56.198198 kubelet[3373]: E1013 00:27:56.198187 3373 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:27:56.198461 kubelet[3373]: E1013 00:27:56.198384 3373 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:27:56.198461 kubelet[3373]: W1013 00:27:56.198392 3373 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:27:56.198461 kubelet[3373]: E1013 00:27:56.198401 3373 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:27:56.198700 kubelet[3373]: E1013 00:27:56.198626 3373 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:27:56.198700 kubelet[3373]: W1013 00:27:56.198635 3373 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:27:56.198700 kubelet[3373]: E1013 00:27:56.198644 3373 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:27:56.198931 kubelet[3373]: E1013 00:27:56.198870 3373 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:27:56.198931 kubelet[3373]: W1013 00:27:56.198879 3373 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:27:56.198931 kubelet[3373]: E1013 00:27:56.198887 3373 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:27:56.218188 kubelet[3373]: E1013 00:27:56.218164 3373 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:27:56.218188 kubelet[3373]: W1013 00:27:56.218182 3373 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:27:56.218188 kubelet[3373]: E1013 00:27:56.218195 3373 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:27:56.218362 kubelet[3373]: E1013 00:27:56.218324 3373 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:27:56.218362 kubelet[3373]: W1013 00:27:56.218330 3373 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:27:56.218362 kubelet[3373]: E1013 00:27:56.218341 3373 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:27:56.218483 kubelet[3373]: E1013 00:27:56.218476 3373 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:27:56.218483 kubelet[3373]: W1013 00:27:56.218483 3373 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:27:56.218535 kubelet[3373]: E1013 00:27:56.218492 3373 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:27:56.218619 kubelet[3373]: E1013 00:27:56.218608 3373 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:27:56.218619 kubelet[3373]: W1013 00:27:56.218615 3373 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:27:56.218669 kubelet[3373]: E1013 00:27:56.218624 3373 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:27:56.218719 kubelet[3373]: E1013 00:27:56.218711 3373 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:27:56.218719 kubelet[3373]: W1013 00:27:56.218718 3373 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:27:56.218752 kubelet[3373]: E1013 00:27:56.218723 3373 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:27:56.218803 kubelet[3373]: E1013 00:27:56.218796 3373 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:27:56.218803 kubelet[3373]: W1013 00:27:56.218803 3373 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:27:56.218864 kubelet[3373]: E1013 00:27:56.218810 3373 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:27:56.218914 kubelet[3373]: E1013 00:27:56.218903 3373 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:27:56.218914 kubelet[3373]: W1013 00:27:56.218909 3373 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:27:56.218914 kubelet[3373]: E1013 00:27:56.218914 3373 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:27:56.219154 kubelet[3373]: E1013 00:27:56.219139 3373 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:27:56.219309 kubelet[3373]: W1013 00:27:56.219213 3373 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:27:56.219309 kubelet[3373]: E1013 00:27:56.219232 3373 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:27:56.219445 kubelet[3373]: E1013 00:27:56.219435 3373 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:27:56.219524 kubelet[3373]: W1013 00:27:56.219514 3373 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:27:56.219603 kubelet[3373]: E1013 00:27:56.219589 3373 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:27:56.219802 kubelet[3373]: E1013 00:27:56.219792 3373 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:27:56.219917 kubelet[3373]: W1013 00:27:56.219869 3373 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:27:56.219917 kubelet[3373]: E1013 00:27:56.219893 3373 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:27:56.220227 kubelet[3373]: E1013 00:27:56.220132 3373 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:27:56.220227 kubelet[3373]: W1013 00:27:56.220152 3373 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:27:56.220227 kubelet[3373]: E1013 00:27:56.220168 3373 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:27:56.220489 kubelet[3373]: E1013 00:27:56.220438 3373 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:27:56.220632 kubelet[3373]: W1013 00:27:56.220551 3373 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:27:56.220632 kubelet[3373]: E1013 00:27:56.220576 3373 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:27:56.220937 kubelet[3373]: E1013 00:27:56.220814 3373 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:27:56.220937 kubelet[3373]: W1013 00:27:56.220824 3373 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:27:56.220937 kubelet[3373]: E1013 00:27:56.220839 3373 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:27:56.221038 kubelet[3373]: E1013 00:27:56.221006 3373 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:27:56.221038 kubelet[3373]: W1013 00:27:56.221014 3373 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:27:56.221038 kubelet[3373]: E1013 00:27:56.221021 3373 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:27:56.221297 kubelet[3373]: E1013 00:27:56.221107 3373 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:27:56.221297 kubelet[3373]: W1013 00:27:56.221115 3373 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:27:56.221297 kubelet[3373]: E1013 00:27:56.221120 3373 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:27:56.221297 kubelet[3373]: E1013 00:27:56.221218 3373 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:27:56.221297 kubelet[3373]: W1013 00:27:56.221222 3373 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:27:56.221297 kubelet[3373]: E1013 00:27:56.221227 3373 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:27:56.221526 kubelet[3373]: E1013 00:27:56.221514 3373 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:27:56.221580 kubelet[3373]: W1013 00:27:56.221570 3373 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:27:56.221622 kubelet[3373]: E1013 00:27:56.221615 3373 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 00:27:56.221812 kubelet[3373]: E1013 00:27:56.221781 3373 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 00:27:56.221812 kubelet[3373]: W1013 00:27:56.221790 3373 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 00:27:56.221812 kubelet[3373]: E1013 00:27:56.221798 3373 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 00:27:56.716908 containerd[1881]: time="2025-10-13T00:27:56.716809075Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:27:56.719447 containerd[1881]: time="2025-10-13T00:27:56.719415522Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4266814" Oct 13 00:27:56.722371 containerd[1881]: time="2025-10-13T00:27:56.722340893Z" level=info msg="ImageCreate event name:\"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:27:56.726578 containerd[1881]: time="2025-10-13T00:27:56.726546779Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:27:56.726954 containerd[1881]: time="2025-10-13T00:27:56.726783971Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5636015\" in 1.326627381s" Oct 13 00:27:56.726954 containerd[1881]: time="2025-10-13T00:27:56.726810444Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\"" Oct 13 00:27:56.729825 containerd[1881]: time="2025-10-13T00:27:56.729792433Z" level=info msg="CreateContainer within sandbox \"de0bd2ee6cc64675c3f504c7774fcccbc8d19a326b69a065fb79abafb24cd5ed\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 13 00:27:56.754611 containerd[1881]: time="2025-10-13T00:27:56.754567900Z" level=info msg="Container 86db5e5204af5500c68f05a67cc280d6ee1e96e9c05e564b1816350fd2596600: CDI devices from CRI Config.CDIDevices: []" Oct 13 00:27:56.772031 containerd[1881]: time="2025-10-13T00:27:56.771993800Z" level=info msg="CreateContainer within sandbox \"de0bd2ee6cc64675c3f504c7774fcccbc8d19a326b69a065fb79abafb24cd5ed\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"86db5e5204af5500c68f05a67cc280d6ee1e96e9c05e564b1816350fd2596600\"" Oct 13 00:27:56.772392 containerd[1881]: time="2025-10-13T00:27:56.772362989Z" level=info msg="StartContainer for \"86db5e5204af5500c68f05a67cc280d6ee1e96e9c05e564b1816350fd2596600\"" Oct 13 00:27:56.773898 containerd[1881]: time="2025-10-13T00:27:56.773840662Z" level=info msg="connecting to shim 86db5e5204af5500c68f05a67cc280d6ee1e96e9c05e564b1816350fd2596600" address="unix:///run/containerd/s/0f113b1674d89fcc638c3e98e5fc118396eb382821d5ffd7884521f0fdde2c07" protocol=ttrpc version=3 Oct 13 00:27:56.795060 systemd[1]: Started cri-containerd-86db5e5204af5500c68f05a67cc280d6ee1e96e9c05e564b1816350fd2596600.scope - libcontainer container 
86db5e5204af5500c68f05a67cc280d6ee1e96e9c05e564b1816350fd2596600. Oct 13 00:27:56.823231 containerd[1881]: time="2025-10-13T00:27:56.823179287Z" level=info msg="StartContainer for \"86db5e5204af5500c68f05a67cc280d6ee1e96e9c05e564b1816350fd2596600\" returns successfully" Oct 13 00:27:56.828846 systemd[1]: cri-containerd-86db5e5204af5500c68f05a67cc280d6ee1e96e9c05e564b1816350fd2596600.scope: Deactivated successfully. Oct 13 00:27:56.834429 containerd[1881]: time="2025-10-13T00:27:56.834271701Z" level=info msg="received exit event container_id:\"86db5e5204af5500c68f05a67cc280d6ee1e96e9c05e564b1816350fd2596600\" id:\"86db5e5204af5500c68f05a67cc280d6ee1e96e9c05e564b1816350fd2596600\" pid:3989 exited_at:{seconds:1760315276 nanos:833339589}" Oct 13 00:27:56.835281 containerd[1881]: time="2025-10-13T00:27:56.835258854Z" level=info msg="TaskExit event in podsandbox handler container_id:\"86db5e5204af5500c68f05a67cc280d6ee1e96e9c05e564b1816350fd2596600\" id:\"86db5e5204af5500c68f05a67cc280d6ee1e96e9c05e564b1816350fd2596600\" pid:3989 exited_at:{seconds:1760315276 nanos:833339589}" Oct 13 00:27:56.849991 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-86db5e5204af5500c68f05a67cc280d6ee1e96e9c05e564b1816350fd2596600-rootfs.mount: Deactivated successfully. 
Oct 13 00:27:57.060230 kubelet[3373]: E1013 00:27:57.060164 3373 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b5nsr" podUID="4ed0fdc7-31ce-42c5-b3c9-46bf732b034a" Oct 13 00:27:57.154648 kubelet[3373]: I1013 00:27:57.154606 3373 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 00:27:57.174840 kubelet[3373]: I1013 00:27:57.174476 3373 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-795dc5677b-82jfd" podStartSLOduration=2.971080657 podStartE2EDuration="5.174462904s" podCreationTimestamp="2025-10-13 00:27:52 +0000 UTC" firstStartedPulling="2025-10-13 00:27:53.196409243 +0000 UTC m=+25.257136569" lastFinishedPulling="2025-10-13 00:27:55.39979149 +0000 UTC m=+27.460518816" observedRunningTime="2025-10-13 00:27:56.163841462 +0000 UTC m=+28.224568788" watchObservedRunningTime="2025-10-13 00:27:57.174462904 +0000 UTC m=+29.235190230" Oct 13 00:27:58.160729 containerd[1881]: time="2025-10-13T00:27:58.160322606Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Oct 13 00:27:59.060250 kubelet[3373]: E1013 00:27:59.060183 3373 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b5nsr" podUID="4ed0fdc7-31ce-42c5-b3c9-46bf732b034a" Oct 13 00:28:01.060411 kubelet[3373]: E1013 00:28:01.060347 3373 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-b5nsr" podUID="4ed0fdc7-31ce-42c5-b3c9-46bf732b034a" Oct 13 00:28:03.060642 kubelet[3373]: E1013 00:28:03.060591 3373 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b5nsr" podUID="4ed0fdc7-31ce-42c5-b3c9-46bf732b034a" Oct 13 00:28:03.545155 containerd[1881]: time="2025-10-13T00:28:03.545099933Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:28:03.548227 containerd[1881]: time="2025-10-13T00:28:03.548086961Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=65913477" Oct 13 00:28:03.551980 containerd[1881]: time="2025-10-13T00:28:03.551841823Z" level=info msg="ImageCreate event name:\"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:28:03.556786 containerd[1881]: time="2025-10-13T00:28:03.556731722Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:28:03.557388 containerd[1881]: time="2025-10-13T00:28:03.557203298Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"67282718\" in 5.396845283s" Oct 13 00:28:03.557388 containerd[1881]: time="2025-10-13T00:28:03.557230219Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" 
returns image reference \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\"" Oct 13 00:28:03.560958 containerd[1881]: time="2025-10-13T00:28:03.560189814Z" level=info msg="CreateContainer within sandbox \"de0bd2ee6cc64675c3f504c7774fcccbc8d19a326b69a065fb79abafb24cd5ed\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 13 00:28:03.581788 containerd[1881]: time="2025-10-13T00:28:03.581740856Z" level=info msg="Container 3cab43ec8a8c7b25c57580854c0a5e2a4db9db5bc953013487497fc48c4ed402: CDI devices from CRI Config.CDIDevices: []" Oct 13 00:28:03.601022 containerd[1881]: time="2025-10-13T00:28:03.600972660Z" level=info msg="CreateContainer within sandbox \"de0bd2ee6cc64675c3f504c7774fcccbc8d19a326b69a065fb79abafb24cd5ed\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"3cab43ec8a8c7b25c57580854c0a5e2a4db9db5bc953013487497fc48c4ed402\"" Oct 13 00:28:03.601542 containerd[1881]: time="2025-10-13T00:28:03.601520399Z" level=info msg="StartContainer for \"3cab43ec8a8c7b25c57580854c0a5e2a4db9db5bc953013487497fc48c4ed402\"" Oct 13 00:28:03.603667 containerd[1881]: time="2025-10-13T00:28:03.603639326Z" level=info msg="connecting to shim 3cab43ec8a8c7b25c57580854c0a5e2a4db9db5bc953013487497fc48c4ed402" address="unix:///run/containerd/s/0f113b1674d89fcc638c3e98e5fc118396eb382821d5ffd7884521f0fdde2c07" protocol=ttrpc version=3 Oct 13 00:28:03.621093 systemd[1]: Started cri-containerd-3cab43ec8a8c7b25c57580854c0a5e2a4db9db5bc953013487497fc48c4ed402.scope - libcontainer container 3cab43ec8a8c7b25c57580854c0a5e2a4db9db5bc953013487497fc48c4ed402. 
Oct 13 00:28:03.657411 containerd[1881]: time="2025-10-13T00:28:03.657353757Z" level=info msg="StartContainer for \"3cab43ec8a8c7b25c57580854c0a5e2a4db9db5bc953013487497fc48c4ed402\" returns successfully" Oct 13 00:28:05.060223 kubelet[3373]: E1013 00:28:05.060164 3373 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b5nsr" podUID="4ed0fdc7-31ce-42c5-b3c9-46bf732b034a" Oct 13 00:28:07.060914 kubelet[3373]: E1013 00:28:07.060855 3373 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b5nsr" podUID="4ed0fdc7-31ce-42c5-b3c9-46bf732b034a" Oct 13 00:28:09.392865 kubelet[3373]: I1013 00:28:08.032226 3373 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 00:28:09.392865 kubelet[3373]: E1013 00:28:09.060438 3373 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b5nsr" podUID="4ed0fdc7-31ce-42c5-b3c9-46bf732b034a" Oct 13 00:28:11.190339 kubelet[3373]: E1013 00:28:11.060033 3373 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b5nsr" podUID="4ed0fdc7-31ce-42c5-b3c9-46bf732b034a" Oct 13 00:28:13.060488 kubelet[3373]: E1013 00:28:13.060430 3373 pod_workers.go:1301] "Error syncing pod, skipping" 
err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b5nsr" podUID="4ed0fdc7-31ce-42c5-b3c9-46bf732b034a" Oct 13 00:28:13.645254 containerd[1881]: time="2025-10-13T00:28:13.645157454Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 13 00:28:13.648039 systemd[1]: cri-containerd-3cab43ec8a8c7b25c57580854c0a5e2a4db9db5bc953013487497fc48c4ed402.scope: Deactivated successfully. Oct 13 00:28:13.648433 systemd[1]: cri-containerd-3cab43ec8a8c7b25c57580854c0a5e2a4db9db5bc953013487497fc48c4ed402.scope: Consumed 314ms CPU time, 186.7M memory peak, 165.8M written to disk. Oct 13 00:28:13.649446 containerd[1881]: time="2025-10-13T00:28:13.649412709Z" level=info msg="received exit event container_id:\"3cab43ec8a8c7b25c57580854c0a5e2a4db9db5bc953013487497fc48c4ed402\" id:\"3cab43ec8a8c7b25c57580854c0a5e2a4db9db5bc953013487497fc48c4ed402\" pid:4047 exited_at:{seconds:1760315293 nanos:648387074}" Oct 13 00:28:13.649606 containerd[1881]: time="2025-10-13T00:28:13.649516936Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3cab43ec8a8c7b25c57580854c0a5e2a4db9db5bc953013487497fc48c4ed402\" id:\"3cab43ec8a8c7b25c57580854c0a5e2a4db9db5bc953013487497fc48c4ed402\" pid:4047 exited_at:{seconds:1760315293 nanos:648387074}" Oct 13 00:28:13.666352 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3cab43ec8a8c7b25c57580854c0a5e2a4db9db5bc953013487497fc48c4ed402-rootfs.mount: Deactivated successfully. 
Oct 13 00:28:13.711523 kubelet[3373]: I1013 00:28:13.711341 3373 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Oct 13 00:28:14.885634 kubelet[3373]: W1013 00:28:13.758001 3373 reflector.go:569] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4459.1.0-a-27183f81a1" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4459.1.0-a-27183f81a1' and this object Oct 13 00:28:14.885634 kubelet[3373]: E1013 00:28:13.758035 3373 reflector.go:166] "Unhandled Error" err="object-\"calico-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4459.1.0-a-27183f81a1\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ci-4459.1.0-a-27183f81a1' and this object" logger="UnhandledError" Oct 13 00:28:14.885634 kubelet[3373]: W1013 00:28:13.761860 3373 reflector.go:569] object-"calico-apiserver"/"calico-apiserver-certs": failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:ci-4459.1.0-a-27183f81a1" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4459.1.0-a-27183f81a1' and this object Oct 13 00:28:14.885634 kubelet[3373]: E1013 00:28:13.761883 3373 reflector.go:166] "Unhandled Error" err="object-\"calico-apiserver\"/\"calico-apiserver-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"calico-apiserver-certs\" is forbidden: User \"system:node:ci-4459.1.0-a-27183f81a1\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ci-4459.1.0-a-27183f81a1' and this object" logger="UnhandledError" Oct 13 
00:28:13.750420 systemd[1]: Created slice kubepods-burstable-pod6c8898f3_e8e8_4ac1_bc62_12aa1248ba56.slice - libcontainer container kubepods-burstable-pod6c8898f3_e8e8_4ac1_bc62_12aa1248ba56.slice. Oct 13 00:28:14.886600 kubelet[3373]: I1013 00:28:13.824072 3373 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgq5p\" (UniqueName: \"kubernetes.io/projected/6541987e-4c6e-48c0-877c-85d5d35cabc1-kube-api-access-hgq5p\") pod \"calico-apiserver-745499777d-j7f7t\" (UID: \"6541987e-4c6e-48c0-877c-85d5d35cabc1\") " pod="calico-apiserver/calico-apiserver-745499777d-j7f7t" Oct 13 00:28:14.886600 kubelet[3373]: I1013 00:28:13.824104 3373 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9c5hf\" (UniqueName: \"kubernetes.io/projected/ed6f0101-b75c-4fad-8e70-d80f4b301375-kube-api-access-9c5hf\") pod \"whisker-c88868d59-krbxb\" (UID: \"ed6f0101-b75c-4fad-8e70-d80f4b301375\") " pod="calico-system/whisker-c88868d59-krbxb" Oct 13 00:28:14.886600 kubelet[3373]: I1013 00:28:13.824120 3373 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mt6kb\" (UniqueName: \"kubernetes.io/projected/dd6494fc-6077-4e51-941e-0d0f0b6a8344-kube-api-access-mt6kb\") pod \"calico-apiserver-6d4d7db98f-gx5fl\" (UID: \"dd6494fc-6077-4e51-941e-0d0f0b6a8344\") " pod="calico-apiserver/calico-apiserver-6d4d7db98f-gx5fl" Oct 13 00:28:14.886600 kubelet[3373]: I1013 00:28:13.824132 3373 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8a7eef07-5d01-44a9-aad7-187883c09c3b-tigera-ca-bundle\") pod \"calico-kube-controllers-844f59d67c-9hbnj\" (UID: \"8a7eef07-5d01-44a9-aad7-187883c09c3b\") " pod="calico-system/calico-kube-controllers-844f59d67c-9hbnj" Oct 13 00:28:14.886600 kubelet[3373]: I1013 00:28:13.824173 3373 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/87870071-4bbc-43b3-a4d4-ede5124d2669-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-j7cfn\" (UID: \"87870071-4bbc-43b3-a4d4-ede5124d2669\") " pod="calico-system/goldmane-54d579b49d-j7cfn" Oct 13 00:28:13.760998 systemd[1]: Created slice kubepods-burstable-pod7ce906df_24f0_40c1_9e22_eb83d0c34f2f.slice - libcontainer container kubepods-burstable-pod7ce906df_24f0_40c1_9e22_eb83d0c34f2f.slice. Oct 13 00:28:14.886720 kubelet[3373]: I1013 00:28:13.824256 3373 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ql6t8\" (UniqueName: \"kubernetes.io/projected/d5df1242-3535-480f-83ed-f48fbf6f9e8f-kube-api-access-ql6t8\") pod \"calico-apiserver-6d4d7db98f-ctlg8\" (UID: \"d5df1242-3535-480f-83ed-f48fbf6f9e8f\") " pod="calico-apiserver/calico-apiserver-6d4d7db98f-ctlg8" Oct 13 00:28:14.886720 kubelet[3373]: I1013 00:28:13.824373 3373 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ed6f0101-b75c-4fad-8e70-d80f4b301375-whisker-backend-key-pair\") pod \"whisker-c88868d59-krbxb\" (UID: \"ed6f0101-b75c-4fad-8e70-d80f4b301375\") " pod="calico-system/whisker-c88868d59-krbxb" Oct 13 00:28:14.886720 kubelet[3373]: I1013 00:28:13.824392 3373 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87870071-4bbc-43b3-a4d4-ede5124d2669-config\") pod \"goldmane-54d579b49d-j7cfn\" (UID: \"87870071-4bbc-43b3-a4d4-ede5124d2669\") " pod="calico-system/goldmane-54d579b49d-j7cfn" Oct 13 00:28:14.886720 kubelet[3373]: I1013 00:28:13.824408 3373 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/7ce906df-24f0-40c1-9e22-eb83d0c34f2f-config-volume\") pod \"coredns-668d6bf9bc-mx5qd\" (UID: \"7ce906df-24f0-40c1-9e22-eb83d0c34f2f\") " pod="kube-system/coredns-668d6bf9bc-mx5qd" Oct 13 00:28:14.886720 kubelet[3373]: I1013 00:28:13.824420 3373 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mgfk\" (UniqueName: \"kubernetes.io/projected/8a7eef07-5d01-44a9-aad7-187883c09c3b-kube-api-access-7mgfk\") pod \"calico-kube-controllers-844f59d67c-9hbnj\" (UID: \"8a7eef07-5d01-44a9-aad7-187883c09c3b\") " pod="calico-system/calico-kube-controllers-844f59d67c-9hbnj" Oct 13 00:28:13.769365 systemd[1]: Created slice kubepods-besteffort-podd5df1242_3535_480f_83ed_f48fbf6f9e8f.slice - libcontainer container kubepods-besteffort-podd5df1242_3535_480f_83ed_f48fbf6f9e8f.slice. Oct 13 00:28:14.886832 kubelet[3373]: I1013 00:28:13.824608 3373 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/87870071-4bbc-43b3-a4d4-ede5124d2669-goldmane-key-pair\") pod \"goldmane-54d579b49d-j7cfn\" (UID: \"87870071-4bbc-43b3-a4d4-ede5124d2669\") " pod="calico-system/goldmane-54d579b49d-j7cfn" Oct 13 00:28:14.886832 kubelet[3373]: I1013 00:28:13.824622 3373 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d5df1242-3535-480f-83ed-f48fbf6f9e8f-calico-apiserver-certs\") pod \"calico-apiserver-6d4d7db98f-ctlg8\" (UID: \"d5df1242-3535-480f-83ed-f48fbf6f9e8f\") " pod="calico-apiserver/calico-apiserver-6d4d7db98f-ctlg8" Oct 13 00:28:14.886832 kubelet[3373]: I1013 00:28:13.824639 3373 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/dd6494fc-6077-4e51-941e-0d0f0b6a8344-calico-apiserver-certs\") pod 
\"calico-apiserver-6d4d7db98f-gx5fl\" (UID: \"dd6494fc-6077-4e51-941e-0d0f0b6a8344\") " pod="calico-apiserver/calico-apiserver-6d4d7db98f-gx5fl" Oct 13 00:28:14.886832 kubelet[3373]: I1013 00:28:13.824661 3373 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9w9v\" (UniqueName: \"kubernetes.io/projected/87870071-4bbc-43b3-a4d4-ede5124d2669-kube-api-access-j9w9v\") pod \"goldmane-54d579b49d-j7cfn\" (UID: \"87870071-4bbc-43b3-a4d4-ede5124d2669\") " pod="calico-system/goldmane-54d579b49d-j7cfn" Oct 13 00:28:14.886832 kubelet[3373]: I1013 00:28:13.824674 3373 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fbm6\" (UniqueName: \"kubernetes.io/projected/7ce906df-24f0-40c1-9e22-eb83d0c34f2f-kube-api-access-5fbm6\") pod \"coredns-668d6bf9bc-mx5qd\" (UID: \"7ce906df-24f0-40c1-9e22-eb83d0c34f2f\") " pod="kube-system/coredns-668d6bf9bc-mx5qd" Oct 13 00:28:13.776204 systemd[1]: Created slice kubepods-besteffort-pod8a7eef07_5d01_44a9_aad7_187883c09c3b.slice - libcontainer container kubepods-besteffort-pod8a7eef07_5d01_44a9_aad7_187883c09c3b.slice. 
Oct 13 00:28:14.886948 kubelet[3373]: I1013 00:28:13.824689 3373 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4cc5\" (UniqueName: \"kubernetes.io/projected/6c8898f3-e8e8-4ac1-bc62-12aa1248ba56-kube-api-access-h4cc5\") pod \"coredns-668d6bf9bc-7lknh\" (UID: \"6c8898f3-e8e8-4ac1-bc62-12aa1248ba56\") " pod="kube-system/coredns-668d6bf9bc-7lknh" Oct 13 00:28:14.886948 kubelet[3373]: I1013 00:28:13.824820 3373 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed6f0101-b75c-4fad-8e70-d80f4b301375-whisker-ca-bundle\") pod \"whisker-c88868d59-krbxb\" (UID: \"ed6f0101-b75c-4fad-8e70-d80f4b301375\") " pod="calico-system/whisker-c88868d59-krbxb" Oct 13 00:28:14.886948 kubelet[3373]: I1013 00:28:13.824855 3373 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6541987e-4c6e-48c0-877c-85d5d35cabc1-calico-apiserver-certs\") pod \"calico-apiserver-745499777d-j7f7t\" (UID: \"6541987e-4c6e-48c0-877c-85d5d35cabc1\") " pod="calico-apiserver/calico-apiserver-745499777d-j7f7t" Oct 13 00:28:14.886948 kubelet[3373]: I1013 00:28:13.824873 3373 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6c8898f3-e8e8-4ac1-bc62-12aa1248ba56-config-volume\") pod \"coredns-668d6bf9bc-7lknh\" (UID: \"6c8898f3-e8e8-4ac1-bc62-12aa1248ba56\") " pod="kube-system/coredns-668d6bf9bc-7lknh" Oct 13 00:28:13.782385 systemd[1]: Created slice kubepods-besteffort-pod87870071_4bbc_43b3_a4d4_ede5124d2669.slice - libcontainer container kubepods-besteffort-pod87870071_4bbc_43b3_a4d4_ede5124d2669.slice. 
Oct 13 00:28:13.789829 systemd[1]: Created slice kubepods-besteffort-pod6541987e_4c6e_48c0_877c_85d5d35cabc1.slice - libcontainer container kubepods-besteffort-pod6541987e_4c6e_48c0_877c_85d5d35cabc1.slice. Oct 13 00:28:13.793581 systemd[1]: Created slice kubepods-besteffort-poddd6494fc_6077_4e51_941e_0d0f0b6a8344.slice - libcontainer container kubepods-besteffort-poddd6494fc_6077_4e51_941e_0d0f0b6a8344.slice. Oct 13 00:28:13.799999 systemd[1]: Created slice kubepods-besteffort-poded6f0101_b75c_4fad_8e70_d80f4b301375.slice - libcontainer container kubepods-besteffort-poded6f0101_b75c_4fad_8e70_d80f4b301375.slice. Oct 13 00:28:14.926985 kubelet[3373]: E1013 00:28:14.926933 3373 secret.go:189] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Oct 13 00:28:14.927193 kubelet[3373]: E1013 00:28:14.927032 3373 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6541987e-4c6e-48c0-877c-85d5d35cabc1-calico-apiserver-certs podName:6541987e-4c6e-48c0-877c-85d5d35cabc1 nodeName:}" failed. No retries permitted until 2025-10-13 00:28:15.427012921 +0000 UTC m=+47.487740247 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/6541987e-4c6e-48c0-877c-85d5d35cabc1-calico-apiserver-certs") pod "calico-apiserver-745499777d-j7f7t" (UID: "6541987e-4c6e-48c0-877c-85d5d35cabc1") : failed to sync secret cache: timed out waiting for the condition Oct 13 00:28:14.928442 kubelet[3373]: E1013 00:28:14.928017 3373 secret.go:189] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Oct 13 00:28:14.928635 kubelet[3373]: E1013 00:28:14.928620 3373 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dd6494fc-6077-4e51-941e-0d0f0b6a8344-calico-apiserver-certs podName:dd6494fc-6077-4e51-941e-0d0f0b6a8344 nodeName:}" failed. No retries permitted until 2025-10-13 00:28:15.42860615 +0000 UTC m=+47.489333476 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/dd6494fc-6077-4e51-941e-0d0f0b6a8344-calico-apiserver-certs") pod "calico-apiserver-6d4d7db98f-gx5fl" (UID: "dd6494fc-6077-4e51-941e-0d0f0b6a8344") : failed to sync secret cache: timed out waiting for the condition Oct 13 00:28:14.928758 kubelet[3373]: E1013 00:28:14.928557 3373 secret.go:189] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Oct 13 00:28:14.928758 kubelet[3373]: E1013 00:28:14.928743 3373 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5df1242-3535-480f-83ed-f48fbf6f9e8f-calico-apiserver-certs podName:d5df1242-3535-480f-83ed-f48fbf6f9e8f nodeName:}" failed. No retries permitted until 2025-10-13 00:28:15.428734746 +0000 UTC m=+47.489462080 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/d5df1242-3535-480f-83ed-f48fbf6f9e8f-calico-apiserver-certs") pod "calico-apiserver-6d4d7db98f-ctlg8" (UID: "d5df1242-3535-480f-83ed-f48fbf6f9e8f") : failed to sync secret cache: timed out waiting for the condition Oct 13 00:28:14.932404 kubelet[3373]: E1013 00:28:14.932004 3373 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Oct 13 00:28:14.932404 kubelet[3373]: E1013 00:28:14.932034 3373 projected.go:194] Error preparing data for projected volume kube-api-access-mt6kb for pod calico-apiserver/calico-apiserver-6d4d7db98f-gx5fl: failed to sync configmap cache: timed out waiting for the condition Oct 13 00:28:14.932404 kubelet[3373]: E1013 00:28:14.932067 3373 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/dd6494fc-6077-4e51-941e-0d0f0b6a8344-kube-api-access-mt6kb podName:dd6494fc-6077-4e51-941e-0d0f0b6a8344 nodeName:}" failed. No retries permitted until 2025-10-13 00:28:15.43205697 +0000 UTC m=+47.492784296 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-mt6kb" (UniqueName: "kubernetes.io/projected/dd6494fc-6077-4e51-941e-0d0f0b6a8344-kube-api-access-mt6kb") pod "calico-apiserver-6d4d7db98f-gx5fl" (UID: "dd6494fc-6077-4e51-941e-0d0f0b6a8344") : failed to sync configmap cache: timed out waiting for the condition Oct 13 00:28:14.938080 kubelet[3373]: E1013 00:28:14.938058 3373 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Oct 13 00:28:14.938183 kubelet[3373]: E1013 00:28:14.938173 3373 projected.go:194] Error preparing data for projected volume kube-api-access-hgq5p for pod calico-apiserver/calico-apiserver-745499777d-j7f7t: failed to sync configmap cache: timed out waiting for the condition Oct 13 00:28:14.938279 kubelet[3373]: E1013 00:28:14.938269 3373 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6541987e-4c6e-48c0-877c-85d5d35cabc1-kube-api-access-hgq5p podName:6541987e-4c6e-48c0-877c-85d5d35cabc1 nodeName:}" failed. No retries permitted until 2025-10-13 00:28:15.438253882 +0000 UTC m=+47.498981208 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-hgq5p" (UniqueName: "kubernetes.io/projected/6541987e-4c6e-48c0-877c-85d5d35cabc1-kube-api-access-hgq5p") pod "calico-apiserver-745499777d-j7f7t" (UID: "6541987e-4c6e-48c0-877c-85d5d35cabc1") : failed to sync configmap cache: timed out waiting for the condition Oct 13 00:28:14.939193 kubelet[3373]: E1013 00:28:14.939172 3373 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Oct 13 00:28:14.939234 kubelet[3373]: E1013 00:28:14.939196 3373 projected.go:194] Error preparing data for projected volume kube-api-access-ql6t8 for pod calico-apiserver/calico-apiserver-6d4d7db98f-ctlg8: failed to sync configmap cache: timed out waiting for the condition Oct 13 00:28:14.939234 kubelet[3373]: E1013 00:28:14.939230 3373 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d5df1242-3535-480f-83ed-f48fbf6f9e8f-kube-api-access-ql6t8 podName:d5df1242-3535-480f-83ed-f48fbf6f9e8f nodeName:}" failed. No retries permitted until 2025-10-13 00:28:15.439220186 +0000 UTC m=+47.499947520 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-ql6t8" (UniqueName: "kubernetes.io/projected/d5df1242-3535-480f-83ed-f48fbf6f9e8f-kube-api-access-ql6t8") pod "calico-apiserver-6d4d7db98f-ctlg8" (UID: "d5df1242-3535-480f-83ed-f48fbf6f9e8f") : failed to sync configmap cache: timed out waiting for the condition Oct 13 00:28:15.065525 systemd[1]: Created slice kubepods-besteffort-pod4ed0fdc7_31ce_42c5_b3c9_46bf732b034a.slice - libcontainer container kubepods-besteffort-pod4ed0fdc7_31ce_42c5_b3c9_46bf732b034a.slice. 
Oct 13 00:28:15.067706 containerd[1881]: time="2025-10-13T00:28:15.067428291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b5nsr,Uid:4ed0fdc7-31ce-42c5-b3c9-46bf732b034a,Namespace:calico-system,Attempt:0,}" Oct 13 00:28:15.160177 containerd[1881]: time="2025-10-13T00:28:15.160043673Z" level=error msg="Failed to destroy network for sandbox \"419e42cf8a3e1b7646dba2d007e28e8afe445fb9460fab7b98db33a5d1ae05bf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:28:15.163303 containerd[1881]: time="2025-10-13T00:28:15.163252389Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b5nsr,Uid:4ed0fdc7-31ce-42c5-b3c9-46bf732b034a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"419e42cf8a3e1b7646dba2d007e28e8afe445fb9460fab7b98db33a5d1ae05bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:28:15.164041 kubelet[3373]: E1013 00:28:15.163534 3373 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"419e42cf8a3e1b7646dba2d007e28e8afe445fb9460fab7b98db33a5d1ae05bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:28:15.164041 kubelet[3373]: E1013 00:28:15.163624 3373 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"419e42cf8a3e1b7646dba2d007e28e8afe445fb9460fab7b98db33a5d1ae05bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-b5nsr" Oct 13 00:28:15.164041 kubelet[3373]: E1013 00:28:15.163658 3373 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"419e42cf8a3e1b7646dba2d007e28e8afe445fb9460fab7b98db33a5d1ae05bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-b5nsr" Oct 13 00:28:15.164168 kubelet[3373]: E1013 00:28:15.163697 3373 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-b5nsr_calico-system(4ed0fdc7-31ce-42c5-b3c9-46bf732b034a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-b5nsr_calico-system(4ed0fdc7-31ce-42c5-b3c9-46bf732b034a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"419e42cf8a3e1b7646dba2d007e28e8afe445fb9460fab7b98db33a5d1ae05bf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-b5nsr" podUID="4ed0fdc7-31ce-42c5-b3c9-46bf732b034a" Oct 13 00:28:15.186822 containerd[1881]: time="2025-10-13T00:28:15.186744394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7lknh,Uid:6c8898f3-e8e8-4ac1-bc62-12aa1248ba56,Namespace:kube-system,Attempt:0,}" Oct 13 00:28:15.191236 containerd[1881]: time="2025-10-13T00:28:15.191204935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mx5qd,Uid:7ce906df-24f0-40c1-9e22-eb83d0c34f2f,Namespace:kube-system,Attempt:0,}" Oct 13 00:28:15.193154 containerd[1881]: time="2025-10-13T00:28:15.191686352Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Oct 13 00:28:15.195967 containerd[1881]: time="2025-10-13T00:28:15.195842579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-844f59d67c-9hbnj,Uid:8a7eef07-5d01-44a9-aad7-187883c09c3b,Namespace:calico-system,Attempt:0,}" Oct 13 00:28:15.198045 containerd[1881]: time="2025-10-13T00:28:15.198016860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-j7cfn,Uid:87870071-4bbc-43b3-a4d4-ede5124d2669,Namespace:calico-system,Attempt:0,}" Oct 13 00:28:15.203479 containerd[1881]: time="2025-10-13T00:28:15.203447794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-c88868d59-krbxb,Uid:ed6f0101-b75c-4fad-8e70-d80f4b301375,Namespace:calico-system,Attempt:0,}" Oct 13 00:28:15.278480 containerd[1881]: time="2025-10-13T00:28:15.278412624Z" level=error msg="Failed to destroy network for sandbox \"0a1ecb898f97222aeaab5aad23b342f7ff4e5485c22c045f3ac10eb5a9dee4f8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:28:15.282385 containerd[1881]: time="2025-10-13T00:28:15.282351644Z" level=error msg="Failed to destroy network for sandbox \"e60cae0d45f6b5c4b6d2b34b59ad6c4408420e7571272bdae263f0e276cc31f7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:28:15.283325 containerd[1881]: time="2025-10-13T00:28:15.283214865Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mx5qd,Uid:7ce906df-24f0-40c1-9e22-eb83d0c34f2f,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a1ecb898f97222aeaab5aad23b342f7ff4e5485c22c045f3ac10eb5a9dee4f8\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:28:15.284519 kubelet[3373]: E1013 00:28:15.283548 3373 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a1ecb898f97222aeaab5aad23b342f7ff4e5485c22c045f3ac10eb5a9dee4f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:28:15.284519 kubelet[3373]: E1013 00:28:15.283613 3373 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a1ecb898f97222aeaab5aad23b342f7ff4e5485c22c045f3ac10eb5a9dee4f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-mx5qd" Oct 13 00:28:15.284519 kubelet[3373]: E1013 00:28:15.283631 3373 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a1ecb898f97222aeaab5aad23b342f7ff4e5485c22c045f3ac10eb5a9dee4f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-mx5qd" Oct 13 00:28:15.284670 kubelet[3373]: E1013 00:28:15.283672 3373 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-mx5qd_kube-system(7ce906df-24f0-40c1-9e22-eb83d0c34f2f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-mx5qd_kube-system(7ce906df-24f0-40c1-9e22-eb83d0c34f2f)\\\": rpc error: code = Unknown desc = failed 
to setup network for sandbox \\\"0a1ecb898f97222aeaab5aad23b342f7ff4e5485c22c045f3ac10eb5a9dee4f8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-mx5qd" podUID="7ce906df-24f0-40c1-9e22-eb83d0c34f2f" Oct 13 00:28:15.287162 containerd[1881]: time="2025-10-13T00:28:15.287128964Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7lknh,Uid:6c8898f3-e8e8-4ac1-bc62-12aa1248ba56,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e60cae0d45f6b5c4b6d2b34b59ad6c4408420e7571272bdae263f0e276cc31f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:28:15.287432 kubelet[3373]: E1013 00:28:15.287404 3373 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e60cae0d45f6b5c4b6d2b34b59ad6c4408420e7571272bdae263f0e276cc31f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:28:15.287489 kubelet[3373]: E1013 00:28:15.287446 3373 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e60cae0d45f6b5c4b6d2b34b59ad6c4408420e7571272bdae263f0e276cc31f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-7lknh" Oct 13 00:28:15.287489 kubelet[3373]: E1013 00:28:15.287464 3373 kuberuntime_manager.go:1237] "CreatePodSandbox for 
pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e60cae0d45f6b5c4b6d2b34b59ad6c4408420e7571272bdae263f0e276cc31f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-7lknh" Oct 13 00:28:15.287637 kubelet[3373]: E1013 00:28:15.287611 3373 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-7lknh_kube-system(6c8898f3-e8e8-4ac1-bc62-12aa1248ba56)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-7lknh_kube-system(6c8898f3-e8e8-4ac1-bc62-12aa1248ba56)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e60cae0d45f6b5c4b6d2b34b59ad6c4408420e7571272bdae263f0e276cc31f7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-7lknh" podUID="6c8898f3-e8e8-4ac1-bc62-12aa1248ba56" Oct 13 00:28:15.315270 containerd[1881]: time="2025-10-13T00:28:15.315196171Z" level=error msg="Failed to destroy network for sandbox \"d69f4a175761c5f51f8eed638bd6ee41f29c4b149e8084aa4d0c44f48502c1e4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:28:15.319974 containerd[1881]: time="2025-10-13T00:28:15.319903545Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-c88868d59-krbxb,Uid:ed6f0101-b75c-4fad-8e70-d80f4b301375,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d69f4a175761c5f51f8eed638bd6ee41f29c4b149e8084aa4d0c44f48502c1e4\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:28:15.320200 kubelet[3373]: E1013 00:28:15.320168 3373 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d69f4a175761c5f51f8eed638bd6ee41f29c4b149e8084aa4d0c44f48502c1e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:28:15.320252 kubelet[3373]: E1013 00:28:15.320227 3373 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d69f4a175761c5f51f8eed638bd6ee41f29c4b149e8084aa4d0c44f48502c1e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-c88868d59-krbxb" Oct 13 00:28:15.320252 kubelet[3373]: E1013 00:28:15.320245 3373 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d69f4a175761c5f51f8eed638bd6ee41f29c4b149e8084aa4d0c44f48502c1e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-c88868d59-krbxb" Oct 13 00:28:15.320387 kubelet[3373]: E1013 00:28:15.320287 3373 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-c88868d59-krbxb_calico-system(ed6f0101-b75c-4fad-8e70-d80f4b301375)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-c88868d59-krbxb_calico-system(ed6f0101-b75c-4fad-8e70-d80f4b301375)\\\": rpc error: code = Unknown desc = failed to 
setup network for sandbox \\\"d69f4a175761c5f51f8eed638bd6ee41f29c4b149e8084aa4d0c44f48502c1e4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-c88868d59-krbxb" podUID="ed6f0101-b75c-4fad-8e70-d80f4b301375" Oct 13 00:28:15.322009 containerd[1881]: time="2025-10-13T00:28:15.321972686Z" level=error msg="Failed to destroy network for sandbox \"38eb323c89e3aebcc3efc4a0982eec6dd4b1707ae0a9d48a9182f9a31235c302\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:28:15.324335 containerd[1881]: time="2025-10-13T00:28:15.324298564Z" level=error msg="Failed to destroy network for sandbox \"8c99345d586dbf22146da3672a301720d7bb4946d8d36bafacfce61ea4644fbb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:28:15.325343 containerd[1881]: time="2025-10-13T00:28:15.325308790Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-j7cfn,Uid:87870071-4bbc-43b3-a4d4-ede5124d2669,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"38eb323c89e3aebcc3efc4a0982eec6dd4b1707ae0a9d48a9182f9a31235c302\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:28:15.325648 kubelet[3373]: E1013 00:28:15.325604 3373 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38eb323c89e3aebcc3efc4a0982eec6dd4b1707ae0a9d48a9182f9a31235c302\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:28:15.325705 kubelet[3373]: E1013 00:28:15.325666 3373 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38eb323c89e3aebcc3efc4a0982eec6dd4b1707ae0a9d48a9182f9a31235c302\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-j7cfn" Oct 13 00:28:15.325705 kubelet[3373]: E1013 00:28:15.325682 3373 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38eb323c89e3aebcc3efc4a0982eec6dd4b1707ae0a9d48a9182f9a31235c302\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-j7cfn" Oct 13 00:28:15.325749 kubelet[3373]: E1013 00:28:15.325714 3373 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-j7cfn_calico-system(87870071-4bbc-43b3-a4d4-ede5124d2669)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-j7cfn_calico-system(87870071-4bbc-43b3-a4d4-ede5124d2669)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"38eb323c89e3aebcc3efc4a0982eec6dd4b1707ae0a9d48a9182f9a31235c302\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-j7cfn" podUID="87870071-4bbc-43b3-a4d4-ede5124d2669" Oct 13 00:28:15.328234 
containerd[1881]: time="2025-10-13T00:28:15.328192567Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-844f59d67c-9hbnj,Uid:8a7eef07-5d01-44a9-aad7-187883c09c3b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c99345d586dbf22146da3672a301720d7bb4946d8d36bafacfce61ea4644fbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:28:15.328457 kubelet[3373]: E1013 00:28:15.328425 3373 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c99345d586dbf22146da3672a301720d7bb4946d8d36bafacfce61ea4644fbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:28:15.328490 kubelet[3373]: E1013 00:28:15.328466 3373 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c99345d586dbf22146da3672a301720d7bb4946d8d36bafacfce61ea4644fbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-844f59d67c-9hbnj" Oct 13 00:28:15.328509 kubelet[3373]: E1013 00:28:15.328478 3373 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c99345d586dbf22146da3672a301720d7bb4946d8d36bafacfce61ea4644fbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-844f59d67c-9hbnj" Oct 13 00:28:15.328531 kubelet[3373]: E1013 00:28:15.328520 3373 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-844f59d67c-9hbnj_calico-system(8a7eef07-5d01-44a9-aad7-187883c09c3b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-844f59d67c-9hbnj_calico-system(8a7eef07-5d01-44a9-aad7-187883c09c3b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8c99345d586dbf22146da3672a301720d7bb4946d8d36bafacfce61ea4644fbb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-844f59d67c-9hbnj" podUID="8a7eef07-5d01-44a9-aad7-187883c09c3b" Oct 13 00:28:15.502025 containerd[1881]: time="2025-10-13T00:28:15.501379790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d4d7db98f-gx5fl,Uid:dd6494fc-6077-4e51-941e-0d0f0b6a8344,Namespace:calico-apiserver,Attempt:0,}" Oct 13 00:28:15.540531 containerd[1881]: time="2025-10-13T00:28:15.540478551Z" level=error msg="Failed to destroy network for sandbox \"40caf02a2ef11a461ddc0a1e36368897d5bad67b3edd9d2d140cd8f38ad94331\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:28:15.544595 containerd[1881]: time="2025-10-13T00:28:15.544542512Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d4d7db98f-gx5fl,Uid:dd6494fc-6077-4e51-941e-0d0f0b6a8344,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"40caf02a2ef11a461ddc0a1e36368897d5bad67b3edd9d2d140cd8f38ad94331\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:28:15.544840 kubelet[3373]: E1013 00:28:15.544760 3373 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40caf02a2ef11a461ddc0a1e36368897d5bad67b3edd9d2d140cd8f38ad94331\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:28:15.544840 kubelet[3373]: E1013 00:28:15.544817 3373 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40caf02a2ef11a461ddc0a1e36368897d5bad67b3edd9d2d140cd8f38ad94331\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d4d7db98f-gx5fl" Oct 13 00:28:15.544840 kubelet[3373]: E1013 00:28:15.544832 3373 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40caf02a2ef11a461ddc0a1e36368897d5bad67b3edd9d2d140cd8f38ad94331\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d4d7db98f-gx5fl" Oct 13 00:28:15.544930 kubelet[3373]: E1013 00:28:15.544868 3373 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6d4d7db98f-gx5fl_calico-apiserver(dd6494fc-6077-4e51-941e-0d0f0b6a8344)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-6d4d7db98f-gx5fl_calico-apiserver(dd6494fc-6077-4e51-941e-0d0f0b6a8344)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"40caf02a2ef11a461ddc0a1e36368897d5bad67b3edd9d2d140cd8f38ad94331\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6d4d7db98f-gx5fl" podUID="dd6494fc-6077-4e51-941e-0d0f0b6a8344" Oct 13 00:28:15.794056 containerd[1881]: time="2025-10-13T00:28:15.793841842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d4d7db98f-ctlg8,Uid:d5df1242-3535-480f-83ed-f48fbf6f9e8f,Namespace:calico-apiserver,Attempt:0,}" Oct 13 00:28:15.798480 containerd[1881]: time="2025-10-13T00:28:15.798436269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-745499777d-j7f7t,Uid:6541987e-4c6e-48c0-877c-85d5d35cabc1,Namespace:calico-apiserver,Attempt:0,}" Oct 13 00:28:15.847844 containerd[1881]: time="2025-10-13T00:28:15.847795526Z" level=error msg="Failed to destroy network for sandbox \"732e282f9090cb7e944c00b13246383dd403833202128af995a425f18986091c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:28:15.850908 containerd[1881]: time="2025-10-13T00:28:15.850873397Z" level=error msg="Failed to destroy network for sandbox \"a81cb302045c86c7c8710ae39eace7d68bdf5cfc89f25cc2089dd40eb9501b36\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:28:15.851448 containerd[1881]: time="2025-10-13T00:28:15.851414744Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-6d4d7db98f-ctlg8,Uid:d5df1242-3535-480f-83ed-f48fbf6f9e8f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"732e282f9090cb7e944c00b13246383dd403833202128af995a425f18986091c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:28:15.852023 kubelet[3373]: E1013 00:28:15.851631 3373 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"732e282f9090cb7e944c00b13246383dd403833202128af995a425f18986091c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:28:15.852023 kubelet[3373]: E1013 00:28:15.851710 3373 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"732e282f9090cb7e944c00b13246383dd403833202128af995a425f18986091c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d4d7db98f-ctlg8" Oct 13 00:28:15.852023 kubelet[3373]: E1013 00:28:15.851729 3373 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"732e282f9090cb7e944c00b13246383dd403833202128af995a425f18986091c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d4d7db98f-ctlg8" Oct 13 00:28:15.852141 kubelet[3373]: E1013 00:28:15.851764 3373 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6d4d7db98f-ctlg8_calico-apiserver(d5df1242-3535-480f-83ed-f48fbf6f9e8f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6d4d7db98f-ctlg8_calico-apiserver(d5df1242-3535-480f-83ed-f48fbf6f9e8f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"732e282f9090cb7e944c00b13246383dd403833202128af995a425f18986091c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6d4d7db98f-ctlg8" podUID="d5df1242-3535-480f-83ed-f48fbf6f9e8f" Oct 13 00:28:15.854257 containerd[1881]: time="2025-10-13T00:28:15.854228974Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-745499777d-j7f7t,Uid:6541987e-4c6e-48c0-877c-85d5d35cabc1,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a81cb302045c86c7c8710ae39eace7d68bdf5cfc89f25cc2089dd40eb9501b36\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:28:15.854443 kubelet[3373]: E1013 00:28:15.854396 3373 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a81cb302045c86c7c8710ae39eace7d68bdf5cfc89f25cc2089dd40eb9501b36\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 00:28:15.854480 kubelet[3373]: E1013 00:28:15.854454 3373 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"a81cb302045c86c7c8710ae39eace7d68bdf5cfc89f25cc2089dd40eb9501b36\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-745499777d-j7f7t" Oct 13 00:28:15.854480 kubelet[3373]: E1013 00:28:15.854468 3373 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a81cb302045c86c7c8710ae39eace7d68bdf5cfc89f25cc2089dd40eb9501b36\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-745499777d-j7f7t" Oct 13 00:28:15.854530 kubelet[3373]: E1013 00:28:15.854503 3373 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-745499777d-j7f7t_calico-apiserver(6541987e-4c6e-48c0-877c-85d5d35cabc1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-745499777d-j7f7t_calico-apiserver(6541987e-4c6e-48c0-877c-85d5d35cabc1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a81cb302045c86c7c8710ae39eace7d68bdf5cfc89f25cc2089dd40eb9501b36\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-745499777d-j7f7t" podUID="6541987e-4c6e-48c0-877c-85d5d35cabc1" Oct 13 00:28:21.099148 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2503649817.mount: Deactivated successfully. 
Oct 13 00:28:21.152600 containerd[1881]: time="2025-10-13T00:28:21.152539862Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:28:21.155122 containerd[1881]: time="2025-10-13T00:28:21.155090705Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=151100457" Oct 13 00:28:21.157918 containerd[1881]: time="2025-10-13T00:28:21.157894732Z" level=info msg="ImageCreate event name:\"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:28:21.163487 containerd[1881]: time="2025-10-13T00:28:21.163452273Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:28:21.164195 containerd[1881]: time="2025-10-13T00:28:21.164169320Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"151100319\" in 5.971093706s" Oct 13 00:28:21.164249 containerd[1881]: time="2025-10-13T00:28:21.164197873Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\"" Oct 13 00:28:21.178029 containerd[1881]: time="2025-10-13T00:28:21.177994370Z" level=info msg="CreateContainer within sandbox \"de0bd2ee6cc64675c3f504c7774fcccbc8d19a326b69a065fb79abafb24cd5ed\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 13 00:28:21.203241 containerd[1881]: time="2025-10-13T00:28:21.203198231Z" level=info msg="Container 
5cce6d6ad6e3b689c9464d26ed717e739885e9507e1b6e83ef1827561f9f51df: CDI devices from CRI Config.CDIDevices: []" Oct 13 00:28:21.222282 containerd[1881]: time="2025-10-13T00:28:21.222238899Z" level=info msg="CreateContainer within sandbox \"de0bd2ee6cc64675c3f504c7774fcccbc8d19a326b69a065fb79abafb24cd5ed\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"5cce6d6ad6e3b689c9464d26ed717e739885e9507e1b6e83ef1827561f9f51df\"" Oct 13 00:28:21.223146 containerd[1881]: time="2025-10-13T00:28:21.222889448Z" level=info msg="StartContainer for \"5cce6d6ad6e3b689c9464d26ed717e739885e9507e1b6e83ef1827561f9f51df\"" Oct 13 00:28:21.225425 containerd[1881]: time="2025-10-13T00:28:21.225378161Z" level=info msg="connecting to shim 5cce6d6ad6e3b689c9464d26ed717e739885e9507e1b6e83ef1827561f9f51df" address="unix:///run/containerd/s/0f113b1674d89fcc638c3e98e5fc118396eb382821d5ffd7884521f0fdde2c07" protocol=ttrpc version=3 Oct 13 00:28:21.243081 systemd[1]: Started cri-containerd-5cce6d6ad6e3b689c9464d26ed717e739885e9507e1b6e83ef1827561f9f51df.scope - libcontainer container 5cce6d6ad6e3b689c9464d26ed717e739885e9507e1b6e83ef1827561f9f51df. Oct 13 00:28:21.275127 containerd[1881]: time="2025-10-13T00:28:21.275093315Z" level=info msg="StartContainer for \"5cce6d6ad6e3b689c9464d26ed717e739885e9507e1b6e83ef1827561f9f51df\" returns successfully" Oct 13 00:28:21.719681 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 13 00:28:21.720374 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Oct 13 00:28:21.982937 kubelet[3373]: I1013 00:28:21.982830 3373 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9c5hf\" (UniqueName: \"kubernetes.io/projected/ed6f0101-b75c-4fad-8e70-d80f4b301375-kube-api-access-9c5hf\") pod \"ed6f0101-b75c-4fad-8e70-d80f4b301375\" (UID: \"ed6f0101-b75c-4fad-8e70-d80f4b301375\") " Oct 13 00:28:21.985224 kubelet[3373]: I1013 00:28:21.983521 3373 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ed6f0101-b75c-4fad-8e70-d80f4b301375-whisker-backend-key-pair\") pod \"ed6f0101-b75c-4fad-8e70-d80f4b301375\" (UID: \"ed6f0101-b75c-4fad-8e70-d80f4b301375\") " Oct 13 00:28:21.985224 kubelet[3373]: I1013 00:28:21.983578 3373 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed6f0101-b75c-4fad-8e70-d80f4b301375-whisker-ca-bundle\") pod \"ed6f0101-b75c-4fad-8e70-d80f4b301375\" (UID: \"ed6f0101-b75c-4fad-8e70-d80f4b301375\") " Oct 13 00:28:21.985710 kubelet[3373]: I1013 00:28:21.985665 3373 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed6f0101-b75c-4fad-8e70-d80f4b301375-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "ed6f0101-b75c-4fad-8e70-d80f4b301375" (UID: "ed6f0101-b75c-4fad-8e70-d80f4b301375"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 13 00:28:21.985770 kubelet[3373]: I1013 00:28:21.985726 3373 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed6f0101-b75c-4fad-8e70-d80f4b301375-kube-api-access-9c5hf" (OuterVolumeSpecName: "kube-api-access-9c5hf") pod "ed6f0101-b75c-4fad-8e70-d80f4b301375" (UID: "ed6f0101-b75c-4fad-8e70-d80f4b301375"). InnerVolumeSpecName "kube-api-access-9c5hf". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 13 00:28:21.987376 kubelet[3373]: I1013 00:28:21.987342 3373 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed6f0101-b75c-4fad-8e70-d80f4b301375-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "ed6f0101-b75c-4fad-8e70-d80f4b301375" (UID: "ed6f0101-b75c-4fad-8e70-d80f4b301375"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 13 00:28:22.067655 systemd[1]: Removed slice kubepods-besteffort-poded6f0101_b75c_4fad_8e70_d80f4b301375.slice - libcontainer container kubepods-besteffort-poded6f0101_b75c_4fad_8e70_d80f4b301375.slice. Oct 13 00:28:22.086873 kubelet[3373]: I1013 00:28:22.086730 3373 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ed6f0101-b75c-4fad-8e70-d80f4b301375-whisker-backend-key-pair\") on node \"ci-4459.1.0-a-27183f81a1\" DevicePath \"\"" Oct 13 00:28:22.087353 kubelet[3373]: I1013 00:28:22.086961 3373 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed6f0101-b75c-4fad-8e70-d80f4b301375-whisker-ca-bundle\") on node \"ci-4459.1.0-a-27183f81a1\" DevicePath \"\"" Oct 13 00:28:22.087353 kubelet[3373]: I1013 00:28:22.087299 3373 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9c5hf\" (UniqueName: \"kubernetes.io/projected/ed6f0101-b75c-4fad-8e70-d80f4b301375-kube-api-access-9c5hf\") on node \"ci-4459.1.0-a-27183f81a1\" DevicePath \"\"" Oct 13 00:28:22.100431 systemd[1]: var-lib-kubelet-pods-ed6f0101\x2db75c\x2d4fad\x2d8e70\x2dd80f4b301375-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9c5hf.mount: Deactivated successfully. 
Oct 13 00:28:22.100517 systemd[1]: var-lib-kubelet-pods-ed6f0101\x2db75c\x2d4fad\x2d8e70\x2dd80f4b301375-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Oct 13 00:28:22.226204 kubelet[3373]: I1013 00:28:22.226072 3373 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-ckqwb" podStartSLOduration=2.413473062 podStartE2EDuration="30.225913787s" podCreationTimestamp="2025-10-13 00:27:52 +0000 UTC" firstStartedPulling="2025-10-13 00:27:53.352410041 +0000 UTC m=+25.413137367" lastFinishedPulling="2025-10-13 00:28:21.164850766 +0000 UTC m=+53.225578092" observedRunningTime="2025-10-13 00:28:22.22539033 +0000 UTC m=+54.286117664" watchObservedRunningTime="2025-10-13 00:28:22.225913787 +0000 UTC m=+54.286783038" Oct 13 00:28:22.295923 containerd[1881]: time="2025-10-13T00:28:22.295836056Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5cce6d6ad6e3b689c9464d26ed717e739885e9507e1b6e83ef1827561f9f51df\" id:\"8758180fcc4a9c219cb75403f87275ab3e6c67d3fd4307544a236c653c123326\" pid:4425 exit_status:1 exited_at:{seconds:1760315302 nanos:295357376}" Oct 13 00:28:22.305818 systemd[1]: Created slice kubepods-besteffort-pod4bc27ed7_1813_4f2c_83c4_46cf59db0730.slice - libcontainer container kubepods-besteffort-pod4bc27ed7_1813_4f2c_83c4_46cf59db0730.slice. 
Oct 13 00:28:22.390113 kubelet[3373]: I1013 00:28:22.390031 3373 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4bc27ed7-1813-4f2c-83c4-46cf59db0730-whisker-ca-bundle\") pod \"whisker-59467dcdf8-dvgb6\" (UID: \"4bc27ed7-1813-4f2c-83c4-46cf59db0730\") " pod="calico-system/whisker-59467dcdf8-dvgb6" Oct 13 00:28:22.390113 kubelet[3373]: I1013 00:28:22.390139 3373 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vb2n6\" (UniqueName: \"kubernetes.io/projected/4bc27ed7-1813-4f2c-83c4-46cf59db0730-kube-api-access-vb2n6\") pod \"whisker-59467dcdf8-dvgb6\" (UID: \"4bc27ed7-1813-4f2c-83c4-46cf59db0730\") " pod="calico-system/whisker-59467dcdf8-dvgb6" Oct 13 00:28:22.390113 kubelet[3373]: I1013 00:28:22.390174 3373 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4bc27ed7-1813-4f2c-83c4-46cf59db0730-whisker-backend-key-pair\") pod \"whisker-59467dcdf8-dvgb6\" (UID: \"4bc27ed7-1813-4f2c-83c4-46cf59db0730\") " pod="calico-system/whisker-59467dcdf8-dvgb6" Oct 13 00:28:22.610601 containerd[1881]: time="2025-10-13T00:28:22.610461922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-59467dcdf8-dvgb6,Uid:4bc27ed7-1813-4f2c-83c4-46cf59db0730,Namespace:calico-system,Attempt:0,}" Oct 13 00:28:22.730078 systemd-networkd[1476]: cali1940bb2684f: Link UP Oct 13 00:28:22.732266 systemd-networkd[1476]: cali1940bb2684f: Gained carrier Oct 13 00:28:22.756774 containerd[1881]: 2025-10-13 00:28:22.635 [INFO][4439] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 13 00:28:22.756774 containerd[1881]: 2025-10-13 00:28:22.657 [INFO][4439] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4459.1.0--a--27183f81a1-k8s-whisker--59467dcdf8--dvgb6-eth0 whisker-59467dcdf8- calico-system 4bc27ed7-1813-4f2c-83c4-46cf59db0730 926 0 2025-10-13 00:28:22 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:59467dcdf8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4459.1.0-a-27183f81a1 whisker-59467dcdf8-dvgb6 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali1940bb2684f [] [] }} ContainerID="fd57f7ad563fa56f4555eb6ea846ac515b995cb1f28010276c293de8b4aa0ef9" Namespace="calico-system" Pod="whisker-59467dcdf8-dvgb6" WorkloadEndpoint="ci--4459.1.0--a--27183f81a1-k8s-whisker--59467dcdf8--dvgb6-" Oct 13 00:28:22.756774 containerd[1881]: 2025-10-13 00:28:22.657 [INFO][4439] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fd57f7ad563fa56f4555eb6ea846ac515b995cb1f28010276c293de8b4aa0ef9" Namespace="calico-system" Pod="whisker-59467dcdf8-dvgb6" WorkloadEndpoint="ci--4459.1.0--a--27183f81a1-k8s-whisker--59467dcdf8--dvgb6-eth0" Oct 13 00:28:22.756774 containerd[1881]: 2025-10-13 00:28:22.675 [INFO][4451] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fd57f7ad563fa56f4555eb6ea846ac515b995cb1f28010276c293de8b4aa0ef9" HandleID="k8s-pod-network.fd57f7ad563fa56f4555eb6ea846ac515b995cb1f28010276c293de8b4aa0ef9" Workload="ci--4459.1.0--a--27183f81a1-k8s-whisker--59467dcdf8--dvgb6-eth0" Oct 13 00:28:22.757110 containerd[1881]: 2025-10-13 00:28:22.675 [INFO][4451] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fd57f7ad563fa56f4555eb6ea846ac515b995cb1f28010276c293de8b4aa0ef9" HandleID="k8s-pod-network.fd57f7ad563fa56f4555eb6ea846ac515b995cb1f28010276c293de8b4aa0ef9" Workload="ci--4459.1.0--a--27183f81a1-k8s-whisker--59467dcdf8--dvgb6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b070), Attrs:map[string]string{"namespace":"calico-system", 
"node":"ci-4459.1.0-a-27183f81a1", "pod":"whisker-59467dcdf8-dvgb6", "timestamp":"2025-10-13 00:28:22.675005615 +0000 UTC"}, Hostname:"ci-4459.1.0-a-27183f81a1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 00:28:22.757110 containerd[1881]: 2025-10-13 00:28:22.675 [INFO][4451] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 00:28:22.757110 containerd[1881]: 2025-10-13 00:28:22.675 [INFO][4451] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 00:28:22.757110 containerd[1881]: 2025-10-13 00:28:22.675 [INFO][4451] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.1.0-a-27183f81a1' Oct 13 00:28:22.757110 containerd[1881]: 2025-10-13 00:28:22.680 [INFO][4451] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fd57f7ad563fa56f4555eb6ea846ac515b995cb1f28010276c293de8b4aa0ef9" host="ci-4459.1.0-a-27183f81a1" Oct 13 00:28:22.757110 containerd[1881]: 2025-10-13 00:28:22.686 [INFO][4451] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.1.0-a-27183f81a1" Oct 13 00:28:22.757110 containerd[1881]: 2025-10-13 00:28:22.689 [INFO][4451] ipam/ipam.go 511: Trying affinity for 192.168.71.128/26 host="ci-4459.1.0-a-27183f81a1" Oct 13 00:28:22.757110 containerd[1881]: 2025-10-13 00:28:22.690 [INFO][4451] ipam/ipam.go 158: Attempting to load block cidr=192.168.71.128/26 host="ci-4459.1.0-a-27183f81a1" Oct 13 00:28:22.757110 containerd[1881]: 2025-10-13 00:28:22.692 [INFO][4451] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.71.128/26 host="ci-4459.1.0-a-27183f81a1" Oct 13 00:28:22.757399 containerd[1881]: 2025-10-13 00:28:22.692 [INFO][4451] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.71.128/26 
handle="k8s-pod-network.fd57f7ad563fa56f4555eb6ea846ac515b995cb1f28010276c293de8b4aa0ef9" host="ci-4459.1.0-a-27183f81a1" Oct 13 00:28:22.757399 containerd[1881]: 2025-10-13 00:28:22.693 [INFO][4451] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.fd57f7ad563fa56f4555eb6ea846ac515b995cb1f28010276c293de8b4aa0ef9 Oct 13 00:28:22.757399 containerd[1881]: 2025-10-13 00:28:22.698 [INFO][4451] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.71.128/26 handle="k8s-pod-network.fd57f7ad563fa56f4555eb6ea846ac515b995cb1f28010276c293de8b4aa0ef9" host="ci-4459.1.0-a-27183f81a1" Oct 13 00:28:22.757399 containerd[1881]: 2025-10-13 00:28:22.708 [INFO][4451] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.71.129/26] block=192.168.71.128/26 handle="k8s-pod-network.fd57f7ad563fa56f4555eb6ea846ac515b995cb1f28010276c293de8b4aa0ef9" host="ci-4459.1.0-a-27183f81a1" Oct 13 00:28:22.757399 containerd[1881]: 2025-10-13 00:28:22.708 [INFO][4451] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.71.129/26] handle="k8s-pod-network.fd57f7ad563fa56f4555eb6ea846ac515b995cb1f28010276c293de8b4aa0ef9" host="ci-4459.1.0-a-27183f81a1" Oct 13 00:28:22.757399 containerd[1881]: 2025-10-13 00:28:22.708 [INFO][4451] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Oct 13 00:28:22.757399 containerd[1881]: 2025-10-13 00:28:22.708 [INFO][4451] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.71.129/26] IPv6=[] ContainerID="fd57f7ad563fa56f4555eb6ea846ac515b995cb1f28010276c293de8b4aa0ef9" HandleID="k8s-pod-network.fd57f7ad563fa56f4555eb6ea846ac515b995cb1f28010276c293de8b4aa0ef9" Workload="ci--4459.1.0--a--27183f81a1-k8s-whisker--59467dcdf8--dvgb6-eth0" Oct 13 00:28:22.757537 containerd[1881]: 2025-10-13 00:28:22.711 [INFO][4439] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fd57f7ad563fa56f4555eb6ea846ac515b995cb1f28010276c293de8b4aa0ef9" Namespace="calico-system" Pod="whisker-59467dcdf8-dvgb6" WorkloadEndpoint="ci--4459.1.0--a--27183f81a1-k8s-whisker--59467dcdf8--dvgb6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--a--27183f81a1-k8s-whisker--59467dcdf8--dvgb6-eth0", GenerateName:"whisker-59467dcdf8-", Namespace:"calico-system", SelfLink:"", UID:"4bc27ed7-1813-4f2c-83c4-46cf59db0730", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 28, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"59467dcdf8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-a-27183f81a1", ContainerID:"", Pod:"whisker-59467dcdf8-dvgb6", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.71.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.whisker"}, InterfaceName:"cali1940bb2684f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:28:22.757537 containerd[1881]: 2025-10-13 00:28:22.711 [INFO][4439] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.71.129/32] ContainerID="fd57f7ad563fa56f4555eb6ea846ac515b995cb1f28010276c293de8b4aa0ef9" Namespace="calico-system" Pod="whisker-59467dcdf8-dvgb6" WorkloadEndpoint="ci--4459.1.0--a--27183f81a1-k8s-whisker--59467dcdf8--dvgb6-eth0" Oct 13 00:28:22.757651 containerd[1881]: 2025-10-13 00:28:22.711 [INFO][4439] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1940bb2684f ContainerID="fd57f7ad563fa56f4555eb6ea846ac515b995cb1f28010276c293de8b4aa0ef9" Namespace="calico-system" Pod="whisker-59467dcdf8-dvgb6" WorkloadEndpoint="ci--4459.1.0--a--27183f81a1-k8s-whisker--59467dcdf8--dvgb6-eth0" Oct 13 00:28:22.757651 containerd[1881]: 2025-10-13 00:28:22.731 [INFO][4439] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fd57f7ad563fa56f4555eb6ea846ac515b995cb1f28010276c293de8b4aa0ef9" Namespace="calico-system" Pod="whisker-59467dcdf8-dvgb6" WorkloadEndpoint="ci--4459.1.0--a--27183f81a1-k8s-whisker--59467dcdf8--dvgb6-eth0" Oct 13 00:28:22.757718 containerd[1881]: 2025-10-13 00:28:22.733 [INFO][4439] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fd57f7ad563fa56f4555eb6ea846ac515b995cb1f28010276c293de8b4aa0ef9" Namespace="calico-system" Pod="whisker-59467dcdf8-dvgb6" WorkloadEndpoint="ci--4459.1.0--a--27183f81a1-k8s-whisker--59467dcdf8--dvgb6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--a--27183f81a1-k8s-whisker--59467dcdf8--dvgb6-eth0", GenerateName:"whisker-59467dcdf8-", Namespace:"calico-system", SelfLink:"", 
UID:"4bc27ed7-1813-4f2c-83c4-46cf59db0730", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 28, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"59467dcdf8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-a-27183f81a1", ContainerID:"fd57f7ad563fa56f4555eb6ea846ac515b995cb1f28010276c293de8b4aa0ef9", Pod:"whisker-59467dcdf8-dvgb6", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.71.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali1940bb2684f", MAC:"1a:09:09:96:44:fe", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:28:22.757806 containerd[1881]: 2025-10-13 00:28:22.752 [INFO][4439] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fd57f7ad563fa56f4555eb6ea846ac515b995cb1f28010276c293de8b4aa0ef9" Namespace="calico-system" Pod="whisker-59467dcdf8-dvgb6" WorkloadEndpoint="ci--4459.1.0--a--27183f81a1-k8s-whisker--59467dcdf8--dvgb6-eth0" Oct 13 00:28:22.803113 containerd[1881]: time="2025-10-13T00:28:22.803072928Z" level=info msg="connecting to shim fd57f7ad563fa56f4555eb6ea846ac515b995cb1f28010276c293de8b4aa0ef9" address="unix:///run/containerd/s/2d420bc2013dbd60421a83538e8dcecd7eaa377b073eaa77ea495488580694bd" namespace=k8s.io protocol=ttrpc version=3 Oct 13 00:28:22.829078 systemd[1]: Started 
cri-containerd-fd57f7ad563fa56f4555eb6ea846ac515b995cb1f28010276c293de8b4aa0ef9.scope - libcontainer container fd57f7ad563fa56f4555eb6ea846ac515b995cb1f28010276c293de8b4aa0ef9. Oct 13 00:28:22.857866 containerd[1881]: time="2025-10-13T00:28:22.857833631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-59467dcdf8-dvgb6,Uid:4bc27ed7-1813-4f2c-83c4-46cf59db0730,Namespace:calico-system,Attempt:0,} returns sandbox id \"fd57f7ad563fa56f4555eb6ea846ac515b995cb1f28010276c293de8b4aa0ef9\"" Oct 13 00:28:22.860338 containerd[1881]: time="2025-10-13T00:28:22.860310383Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Oct 13 00:28:23.423595 containerd[1881]: time="2025-10-13T00:28:23.423531726Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5cce6d6ad6e3b689c9464d26ed717e739885e9507e1b6e83ef1827561f9f51df\" id:\"429b19a777acdf8096b58bf778ddd8ce1e9cf43427321b0d62b6182202d192e2\" pid:4572 exit_status:1 exited_at:{seconds:1760315303 nanos:422617320}" Oct 13 00:28:24.062825 kubelet[3373]: I1013 00:28:24.062785 3373 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed6f0101-b75c-4fad-8e70-d80f4b301375" path="/var/lib/kubelet/pods/ed6f0101-b75c-4fad-8e70-d80f4b301375/volumes" Oct 13 00:28:24.104841 systemd-networkd[1476]: vxlan.calico: Link UP Oct 13 00:28:24.104849 systemd-networkd[1476]: vxlan.calico: Gained carrier Oct 13 00:28:24.571233 containerd[1881]: time="2025-10-13T00:28:24.571181998Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:28:24.574834 containerd[1881]: time="2025-10-13T00:28:24.574688512Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4605606" Oct 13 00:28:24.579410 containerd[1881]: time="2025-10-13T00:28:24.579111448Z" level=info msg="ImageCreate event 
name:\"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:28:24.584650 containerd[1881]: time="2025-10-13T00:28:24.584616300Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:28:24.584982 containerd[1881]: time="2025-10-13T00:28:24.584956311Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"5974839\" in 1.72460139s" Oct 13 00:28:24.584982 containerd[1881]: time="2025-10-13T00:28:24.584984328Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\"" Oct 13 00:28:24.588474 containerd[1881]: time="2025-10-13T00:28:24.588445072Z" level=info msg="CreateContainer within sandbox \"fd57f7ad563fa56f4555eb6ea846ac515b995cb1f28010276c293de8b4aa0ef9\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Oct 13 00:28:24.618463 containerd[1881]: time="2025-10-13T00:28:24.618121574Z" level=info msg="Container f877f88aa2011d4d36981839206dafad3be93b19d3db2fbb6344f66dfff5dc24: CDI devices from CRI Config.CDIDevices: []" Oct 13 00:28:24.620687 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount193814623.mount: Deactivated successfully. 
Oct 13 00:28:24.641235 containerd[1881]: time="2025-10-13T00:28:24.641121803Z" level=info msg="CreateContainer within sandbox \"fd57f7ad563fa56f4555eb6ea846ac515b995cb1f28010276c293de8b4aa0ef9\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"f877f88aa2011d4d36981839206dafad3be93b19d3db2fbb6344f66dfff5dc24\"" Oct 13 00:28:24.641751 containerd[1881]: time="2025-10-13T00:28:24.641692502Z" level=info msg="StartContainer for \"f877f88aa2011d4d36981839206dafad3be93b19d3db2fbb6344f66dfff5dc24\"" Oct 13 00:28:24.642740 containerd[1881]: time="2025-10-13T00:28:24.642678838Z" level=info msg="connecting to shim f877f88aa2011d4d36981839206dafad3be93b19d3db2fbb6344f66dfff5dc24" address="unix:///run/containerd/s/2d420bc2013dbd60421a83538e8dcecd7eaa377b073eaa77ea495488580694bd" protocol=ttrpc version=3 Oct 13 00:28:24.662111 systemd[1]: Started cri-containerd-f877f88aa2011d4d36981839206dafad3be93b19d3db2fbb6344f66dfff5dc24.scope - libcontainer container f877f88aa2011d4d36981839206dafad3be93b19d3db2fbb6344f66dfff5dc24. 
Oct 13 00:28:24.699476 containerd[1881]: time="2025-10-13T00:28:24.699420629Z" level=info msg="StartContainer for \"f877f88aa2011d4d36981839206dafad3be93b19d3db2fbb6344f66dfff5dc24\" returns successfully" Oct 13 00:28:24.701699 containerd[1881]: time="2025-10-13T00:28:24.701635829Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Oct 13 00:28:24.734072 systemd-networkd[1476]: cali1940bb2684f: Gained IPv6LL Oct 13 00:28:25.310137 systemd-networkd[1476]: vxlan.calico: Gained IPv6LL Oct 13 00:28:26.062392 containerd[1881]: time="2025-10-13T00:28:26.062345974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-j7cfn,Uid:87870071-4bbc-43b3-a4d4-ede5124d2669,Namespace:calico-system,Attempt:0,}" Oct 13 00:28:26.156059 systemd-networkd[1476]: cali1520ae78e5a: Link UP Oct 13 00:28:26.156214 systemd-networkd[1476]: cali1520ae78e5a: Gained carrier Oct 13 00:28:26.198013 containerd[1881]: 2025-10-13 00:28:26.093 [INFO][4761] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.1.0--a--27183f81a1-k8s-goldmane--54d579b49d--j7cfn-eth0 goldmane-54d579b49d- calico-system 87870071-4bbc-43b3-a4d4-ede5124d2669 855 0 2025-10-13 00:27:52 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4459.1.0-a-27183f81a1 goldmane-54d579b49d-j7cfn eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali1520ae78e5a [] [] }} ContainerID="daf18e1aedfcc43ec85203315db319e174c78feb36ef65ccb7ced96d962f5772" Namespace="calico-system" Pod="goldmane-54d579b49d-j7cfn" WorkloadEndpoint="ci--4459.1.0--a--27183f81a1-k8s-goldmane--54d579b49d--j7cfn-" Oct 13 00:28:26.198013 containerd[1881]: 2025-10-13 00:28:26.093 [INFO][4761] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="daf18e1aedfcc43ec85203315db319e174c78feb36ef65ccb7ced96d962f5772" Namespace="calico-system" Pod="goldmane-54d579b49d-j7cfn" WorkloadEndpoint="ci--4459.1.0--a--27183f81a1-k8s-goldmane--54d579b49d--j7cfn-eth0" Oct 13 00:28:26.198013 containerd[1881]: 2025-10-13 00:28:26.120 [INFO][4772] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="daf18e1aedfcc43ec85203315db319e174c78feb36ef65ccb7ced96d962f5772" HandleID="k8s-pod-network.daf18e1aedfcc43ec85203315db319e174c78feb36ef65ccb7ced96d962f5772" Workload="ci--4459.1.0--a--27183f81a1-k8s-goldmane--54d579b49d--j7cfn-eth0" Oct 13 00:28:26.198383 containerd[1881]: 2025-10-13 00:28:26.120 [INFO][4772] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="daf18e1aedfcc43ec85203315db319e174c78feb36ef65ccb7ced96d962f5772" HandleID="k8s-pod-network.daf18e1aedfcc43ec85203315db319e174c78feb36ef65ccb7ced96d962f5772" Workload="ci--4459.1.0--a--27183f81a1-k8s-goldmane--54d579b49d--j7cfn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cb0a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.1.0-a-27183f81a1", "pod":"goldmane-54d579b49d-j7cfn", "timestamp":"2025-10-13 00:28:26.120546204 +0000 UTC"}, Hostname:"ci-4459.1.0-a-27183f81a1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 00:28:26.198383 containerd[1881]: 2025-10-13 00:28:26.120 [INFO][4772] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 00:28:26.198383 containerd[1881]: 2025-10-13 00:28:26.120 [INFO][4772] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 00:28:26.198383 containerd[1881]: 2025-10-13 00:28:26.120 [INFO][4772] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.1.0-a-27183f81a1' Oct 13 00:28:26.198383 containerd[1881]: 2025-10-13 00:28:26.125 [INFO][4772] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.daf18e1aedfcc43ec85203315db319e174c78feb36ef65ccb7ced96d962f5772" host="ci-4459.1.0-a-27183f81a1" Oct 13 00:28:26.198383 containerd[1881]: 2025-10-13 00:28:26.129 [INFO][4772] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.1.0-a-27183f81a1" Oct 13 00:28:26.198383 containerd[1881]: 2025-10-13 00:28:26.132 [INFO][4772] ipam/ipam.go 511: Trying affinity for 192.168.71.128/26 host="ci-4459.1.0-a-27183f81a1" Oct 13 00:28:26.198383 containerd[1881]: 2025-10-13 00:28:26.133 [INFO][4772] ipam/ipam.go 158: Attempting to load block cidr=192.168.71.128/26 host="ci-4459.1.0-a-27183f81a1" Oct 13 00:28:26.198383 containerd[1881]: 2025-10-13 00:28:26.134 [INFO][4772] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.71.128/26 host="ci-4459.1.0-a-27183f81a1" Oct 13 00:28:26.198533 containerd[1881]: 2025-10-13 00:28:26.135 [INFO][4772] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.71.128/26 handle="k8s-pod-network.daf18e1aedfcc43ec85203315db319e174c78feb36ef65ccb7ced96d962f5772" host="ci-4459.1.0-a-27183f81a1" Oct 13 00:28:26.198533 containerd[1881]: 2025-10-13 00:28:26.136 [INFO][4772] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.daf18e1aedfcc43ec85203315db319e174c78feb36ef65ccb7ced96d962f5772 Oct 13 00:28:26.198533 containerd[1881]: 2025-10-13 00:28:26.140 [INFO][4772] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.71.128/26 handle="k8s-pod-network.daf18e1aedfcc43ec85203315db319e174c78feb36ef65ccb7ced96d962f5772" host="ci-4459.1.0-a-27183f81a1" Oct 13 00:28:26.198533 containerd[1881]: 2025-10-13 00:28:26.151 [INFO][4772] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.71.130/26] block=192.168.71.128/26 handle="k8s-pod-network.daf18e1aedfcc43ec85203315db319e174c78feb36ef65ccb7ced96d962f5772" host="ci-4459.1.0-a-27183f81a1" Oct 13 00:28:26.198533 containerd[1881]: 2025-10-13 00:28:26.151 [INFO][4772] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.71.130/26] handle="k8s-pod-network.daf18e1aedfcc43ec85203315db319e174c78feb36ef65ccb7ced96d962f5772" host="ci-4459.1.0-a-27183f81a1" Oct 13 00:28:26.198533 containerd[1881]: 2025-10-13 00:28:26.151 [INFO][4772] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 00:28:26.198533 containerd[1881]: 2025-10-13 00:28:26.151 [INFO][4772] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.71.130/26] IPv6=[] ContainerID="daf18e1aedfcc43ec85203315db319e174c78feb36ef65ccb7ced96d962f5772" HandleID="k8s-pod-network.daf18e1aedfcc43ec85203315db319e174c78feb36ef65ccb7ced96d962f5772" Workload="ci--4459.1.0--a--27183f81a1-k8s-goldmane--54d579b49d--j7cfn-eth0" Oct 13 00:28:26.198624 containerd[1881]: 2025-10-13 00:28:26.153 [INFO][4761] cni-plugin/k8s.go 418: Populated endpoint ContainerID="daf18e1aedfcc43ec85203315db319e174c78feb36ef65ccb7ced96d962f5772" Namespace="calico-system" Pod="goldmane-54d579b49d-j7cfn" WorkloadEndpoint="ci--4459.1.0--a--27183f81a1-k8s-goldmane--54d579b49d--j7cfn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--a--27183f81a1-k8s-goldmane--54d579b49d--j7cfn-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"87870071-4bbc-43b3-a4d4-ede5124d2669", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 27, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-a-27183f81a1", ContainerID:"", Pod:"goldmane-54d579b49d-j7cfn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.71.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1520ae78e5a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:28:26.198658 containerd[1881]: 2025-10-13 00:28:26.153 [INFO][4761] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.71.130/32] ContainerID="daf18e1aedfcc43ec85203315db319e174c78feb36ef65ccb7ced96d962f5772" Namespace="calico-system" Pod="goldmane-54d579b49d-j7cfn" WorkloadEndpoint="ci--4459.1.0--a--27183f81a1-k8s-goldmane--54d579b49d--j7cfn-eth0" Oct 13 00:28:26.198658 containerd[1881]: 2025-10-13 00:28:26.153 [INFO][4761] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1520ae78e5a ContainerID="daf18e1aedfcc43ec85203315db319e174c78feb36ef65ccb7ced96d962f5772" Namespace="calico-system" Pod="goldmane-54d579b49d-j7cfn" WorkloadEndpoint="ci--4459.1.0--a--27183f81a1-k8s-goldmane--54d579b49d--j7cfn-eth0" Oct 13 00:28:26.198658 containerd[1881]: 2025-10-13 00:28:26.155 [INFO][4761] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="daf18e1aedfcc43ec85203315db319e174c78feb36ef65ccb7ced96d962f5772" Namespace="calico-system" Pod="goldmane-54d579b49d-j7cfn" WorkloadEndpoint="ci--4459.1.0--a--27183f81a1-k8s-goldmane--54d579b49d--j7cfn-eth0" Oct 13 00:28:26.198701 containerd[1881]: 2025-10-13 00:28:26.156 [INFO][4761] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="daf18e1aedfcc43ec85203315db319e174c78feb36ef65ccb7ced96d962f5772" Namespace="calico-system" Pod="goldmane-54d579b49d-j7cfn" WorkloadEndpoint="ci--4459.1.0--a--27183f81a1-k8s-goldmane--54d579b49d--j7cfn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--a--27183f81a1-k8s-goldmane--54d579b49d--j7cfn-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"87870071-4bbc-43b3-a4d4-ede5124d2669", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 27, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-a-27183f81a1", ContainerID:"daf18e1aedfcc43ec85203315db319e174c78feb36ef65ccb7ced96d962f5772", Pod:"goldmane-54d579b49d-j7cfn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.71.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1520ae78e5a", MAC:"0e:f7:86:ad:d1:a4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:28:26.198733 containerd[1881]: 2025-10-13 00:28:26.195 [INFO][4761] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="daf18e1aedfcc43ec85203315db319e174c78feb36ef65ccb7ced96d962f5772" Namespace="calico-system" Pod="goldmane-54d579b49d-j7cfn" WorkloadEndpoint="ci--4459.1.0--a--27183f81a1-k8s-goldmane--54d579b49d--j7cfn-eth0" Oct 13 00:28:26.244651 containerd[1881]: time="2025-10-13T00:28:26.244609707Z" level=info msg="connecting to shim daf18e1aedfcc43ec85203315db319e174c78feb36ef65ccb7ced96d962f5772" address="unix:///run/containerd/s/9f78f4c315eacf7fe15e183297c2e06d0edf0d16d82dd15f2390a4798026f7bd" namespace=k8s.io protocol=ttrpc version=3 Oct 13 00:28:26.275124 systemd[1]: Started cri-containerd-daf18e1aedfcc43ec85203315db319e174c78feb36ef65ccb7ced96d962f5772.scope - libcontainer container daf18e1aedfcc43ec85203315db319e174c78feb36ef65ccb7ced96d962f5772. Oct 13 00:28:26.312978 containerd[1881]: time="2025-10-13T00:28:26.312719748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-j7cfn,Uid:87870071-4bbc-43b3-a4d4-ede5124d2669,Namespace:calico-system,Attempt:0,} returns sandbox id \"daf18e1aedfcc43ec85203315db319e174c78feb36ef65ccb7ced96d962f5772\"" Oct 13 00:28:26.840525 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3983207866.mount: Deactivated successfully. 
Oct 13 00:28:27.265746 containerd[1881]: time="2025-10-13T00:28:27.265624851Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:28:27.268613 containerd[1881]: time="2025-10-13T00:28:27.268458875Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=30823700" Oct 13 00:28:27.272897 containerd[1881]: time="2025-10-13T00:28:27.272869495Z" level=info msg="ImageCreate event name:\"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:28:27.276876 containerd[1881]: time="2025-10-13T00:28:27.276466335Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:28:27.276876 containerd[1881]: time="2025-10-13T00:28:27.276770433Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"30823530\" in 2.575105851s" Oct 13 00:28:27.276876 containerd[1881]: time="2025-10-13T00:28:27.276796098Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\"" Oct 13 00:28:27.278434 containerd[1881]: time="2025-10-13T00:28:27.278410976Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Oct 13 00:28:27.279601 containerd[1881]: time="2025-10-13T00:28:27.279575152Z" level=info msg="CreateContainer within sandbox 
\"fd57f7ad563fa56f4555eb6ea846ac515b995cb1f28010276c293de8b4aa0ef9\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Oct 13 00:28:27.303411 containerd[1881]: time="2025-10-13T00:28:27.303375230Z" level=info msg="Container 0505406353f0a865a753d199e79ad4bf69edd0fbf9b1caae917b4399bc0402a1: CDI devices from CRI Config.CDIDevices: []" Oct 13 00:28:27.323282 containerd[1881]: time="2025-10-13T00:28:27.323250961Z" level=info msg="CreateContainer within sandbox \"fd57f7ad563fa56f4555eb6ea846ac515b995cb1f28010276c293de8b4aa0ef9\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"0505406353f0a865a753d199e79ad4bf69edd0fbf9b1caae917b4399bc0402a1\"" Oct 13 00:28:27.324158 containerd[1881]: time="2025-10-13T00:28:27.324132263Z" level=info msg="StartContainer for \"0505406353f0a865a753d199e79ad4bf69edd0fbf9b1caae917b4399bc0402a1\"" Oct 13 00:28:27.325190 containerd[1881]: time="2025-10-13T00:28:27.325119640Z" level=info msg="connecting to shim 0505406353f0a865a753d199e79ad4bf69edd0fbf9b1caae917b4399bc0402a1" address="unix:///run/containerd/s/2d420bc2013dbd60421a83538e8dcecd7eaa377b073eaa77ea495488580694bd" protocol=ttrpc version=3 Oct 13 00:28:27.351075 systemd[1]: Started cri-containerd-0505406353f0a865a753d199e79ad4bf69edd0fbf9b1caae917b4399bc0402a1.scope - libcontainer container 0505406353f0a865a753d199e79ad4bf69edd0fbf9b1caae917b4399bc0402a1. 
Oct 13 00:28:27.383025 containerd[1881]: time="2025-10-13T00:28:27.382983061Z" level=info msg="StartContainer for \"0505406353f0a865a753d199e79ad4bf69edd0fbf9b1caae917b4399bc0402a1\" returns successfully" Oct 13 00:28:27.486197 systemd-networkd[1476]: cali1520ae78e5a: Gained IPv6LL Oct 13 00:28:28.061076 containerd[1881]: time="2025-10-13T00:28:28.060999989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b5nsr,Uid:4ed0fdc7-31ce-42c5-b3c9-46bf732b034a,Namespace:calico-system,Attempt:0,}" Oct 13 00:28:28.061349 containerd[1881]: time="2025-10-13T00:28:28.061328208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mx5qd,Uid:7ce906df-24f0-40c1-9e22-eb83d0c34f2f,Namespace:kube-system,Attempt:0,}" Oct 13 00:28:28.061550 containerd[1881]: time="2025-10-13T00:28:28.061415851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-844f59d67c-9hbnj,Uid:8a7eef07-5d01-44a9-aad7-187883c09c3b,Namespace:calico-system,Attempt:0,}" Oct 13 00:28:28.236933 kubelet[3373]: I1013 00:28:28.236874 3373 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-59467dcdf8-dvgb6" podStartSLOduration=1.818407463 podStartE2EDuration="6.23685685s" podCreationTimestamp="2025-10-13 00:28:22 +0000 UTC" firstStartedPulling="2025-10-13 00:28:22.859243333 +0000 UTC m=+54.919970659" lastFinishedPulling="2025-10-13 00:28:27.27769272 +0000 UTC m=+59.338420046" observedRunningTime="2025-10-13 00:28:28.23579111 +0000 UTC m=+60.296518436" watchObservedRunningTime="2025-10-13 00:28:28.23685685 +0000 UTC m=+60.297584184" Oct 13 00:28:28.257039 systemd-networkd[1476]: calid407bfb6a14: Link UP Oct 13 00:28:28.257210 systemd-networkd[1476]: calid407bfb6a14: Gained carrier Oct 13 00:28:28.284972 containerd[1881]: 2025-10-13 00:28:28.134 [INFO][4879] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4459.1.0--a--27183f81a1-k8s-csi--node--driver--b5nsr-eth0 csi-node-driver- calico-system 4ed0fdc7-31ce-42c5-b3c9-46bf732b034a 688 0 2025-10-13 00:27:53 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4459.1.0-a-27183f81a1 csi-node-driver-b5nsr eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calid407bfb6a14 [] [] }} ContainerID="0d980e1c99d405f1c164192a5a588ae90cbe01b4004ef6601252fc948b0540da" Namespace="calico-system" Pod="csi-node-driver-b5nsr" WorkloadEndpoint="ci--4459.1.0--a--27183f81a1-k8s-csi--node--driver--b5nsr-" Oct 13 00:28:28.284972 containerd[1881]: 2025-10-13 00:28:28.135 [INFO][4879] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0d980e1c99d405f1c164192a5a588ae90cbe01b4004ef6601252fc948b0540da" Namespace="calico-system" Pod="csi-node-driver-b5nsr" WorkloadEndpoint="ci--4459.1.0--a--27183f81a1-k8s-csi--node--driver--b5nsr-eth0" Oct 13 00:28:28.284972 containerd[1881]: 2025-10-13 00:28:28.165 [INFO][4918] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0d980e1c99d405f1c164192a5a588ae90cbe01b4004ef6601252fc948b0540da" HandleID="k8s-pod-network.0d980e1c99d405f1c164192a5a588ae90cbe01b4004ef6601252fc948b0540da" Workload="ci--4459.1.0--a--27183f81a1-k8s-csi--node--driver--b5nsr-eth0" Oct 13 00:28:28.285509 containerd[1881]: 2025-10-13 00:28:28.165 [INFO][4918] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0d980e1c99d405f1c164192a5a588ae90cbe01b4004ef6601252fc948b0540da" HandleID="k8s-pod-network.0d980e1c99d405f1c164192a5a588ae90cbe01b4004ef6601252fc948b0540da" Workload="ci--4459.1.0--a--27183f81a1-k8s-csi--node--driver--b5nsr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0x40002d3600), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.1.0-a-27183f81a1", "pod":"csi-node-driver-b5nsr", "timestamp":"2025-10-13 00:28:28.165067489 +0000 UTC"}, Hostname:"ci-4459.1.0-a-27183f81a1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 00:28:28.285509 containerd[1881]: 2025-10-13 00:28:28.165 [INFO][4918] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 00:28:28.285509 containerd[1881]: 2025-10-13 00:28:28.165 [INFO][4918] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 00:28:28.285509 containerd[1881]: 2025-10-13 00:28:28.165 [INFO][4918] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.1.0-a-27183f81a1' Oct 13 00:28:28.285509 containerd[1881]: 2025-10-13 00:28:28.174 [INFO][4918] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0d980e1c99d405f1c164192a5a588ae90cbe01b4004ef6601252fc948b0540da" host="ci-4459.1.0-a-27183f81a1" Oct 13 00:28:28.285509 containerd[1881]: 2025-10-13 00:28:28.179 [INFO][4918] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.1.0-a-27183f81a1" Oct 13 00:28:28.285509 containerd[1881]: 2025-10-13 00:28:28.182 [INFO][4918] ipam/ipam.go 511: Trying affinity for 192.168.71.128/26 host="ci-4459.1.0-a-27183f81a1" Oct 13 00:28:28.285509 containerd[1881]: 2025-10-13 00:28:28.184 [INFO][4918] ipam/ipam.go 158: Attempting to load block cidr=192.168.71.128/26 host="ci-4459.1.0-a-27183f81a1" Oct 13 00:28:28.285509 containerd[1881]: 2025-10-13 00:28:28.185 [INFO][4918] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.71.128/26 host="ci-4459.1.0-a-27183f81a1" Oct 13 00:28:28.285653 containerd[1881]: 2025-10-13 00:28:28.185 [INFO][4918] ipam/ipam.go 1220: Attempting to assign 1 addresses from block 
block=192.168.71.128/26 handle="k8s-pod-network.0d980e1c99d405f1c164192a5a588ae90cbe01b4004ef6601252fc948b0540da" host="ci-4459.1.0-a-27183f81a1" Oct 13 00:28:28.285653 containerd[1881]: 2025-10-13 00:28:28.186 [INFO][4918] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0d980e1c99d405f1c164192a5a588ae90cbe01b4004ef6601252fc948b0540da Oct 13 00:28:28.285653 containerd[1881]: 2025-10-13 00:28:28.191 [INFO][4918] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.71.128/26 handle="k8s-pod-network.0d980e1c99d405f1c164192a5a588ae90cbe01b4004ef6601252fc948b0540da" host="ci-4459.1.0-a-27183f81a1" Oct 13 00:28:28.285653 containerd[1881]: 2025-10-13 00:28:28.200 [INFO][4918] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.71.131/26] block=192.168.71.128/26 handle="k8s-pod-network.0d980e1c99d405f1c164192a5a588ae90cbe01b4004ef6601252fc948b0540da" host="ci-4459.1.0-a-27183f81a1" Oct 13 00:28:28.285653 containerd[1881]: 2025-10-13 00:28:28.200 [INFO][4918] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.71.131/26] handle="k8s-pod-network.0d980e1c99d405f1c164192a5a588ae90cbe01b4004ef6601252fc948b0540da" host="ci-4459.1.0-a-27183f81a1" Oct 13 00:28:28.285653 containerd[1881]: 2025-10-13 00:28:28.200 [INFO][4918] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Oct 13 00:28:28.285653 containerd[1881]: 2025-10-13 00:28:28.200 [INFO][4918] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.71.131/26] IPv6=[] ContainerID="0d980e1c99d405f1c164192a5a588ae90cbe01b4004ef6601252fc948b0540da" HandleID="k8s-pod-network.0d980e1c99d405f1c164192a5a588ae90cbe01b4004ef6601252fc948b0540da" Workload="ci--4459.1.0--a--27183f81a1-k8s-csi--node--driver--b5nsr-eth0" Oct 13 00:28:28.285747 containerd[1881]: 2025-10-13 00:28:28.203 [INFO][4879] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0d980e1c99d405f1c164192a5a588ae90cbe01b4004ef6601252fc948b0540da" Namespace="calico-system" Pod="csi-node-driver-b5nsr" WorkloadEndpoint="ci--4459.1.0--a--27183f81a1-k8s-csi--node--driver--b5nsr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--a--27183f81a1-k8s-csi--node--driver--b5nsr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4ed0fdc7-31ce-42c5-b3c9-46bf732b034a", ResourceVersion:"688", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 27, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-a-27183f81a1", ContainerID:"", Pod:"csi-node-driver-b5nsr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.71.131/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid407bfb6a14", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:28:28.285783 containerd[1881]: 2025-10-13 00:28:28.251 [INFO][4879] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.71.131/32] ContainerID="0d980e1c99d405f1c164192a5a588ae90cbe01b4004ef6601252fc948b0540da" Namespace="calico-system" Pod="csi-node-driver-b5nsr" WorkloadEndpoint="ci--4459.1.0--a--27183f81a1-k8s-csi--node--driver--b5nsr-eth0" Oct 13 00:28:28.285783 containerd[1881]: 2025-10-13 00:28:28.251 [INFO][4879] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid407bfb6a14 ContainerID="0d980e1c99d405f1c164192a5a588ae90cbe01b4004ef6601252fc948b0540da" Namespace="calico-system" Pod="csi-node-driver-b5nsr" WorkloadEndpoint="ci--4459.1.0--a--27183f81a1-k8s-csi--node--driver--b5nsr-eth0" Oct 13 00:28:28.285783 containerd[1881]: 2025-10-13 00:28:28.257 [INFO][4879] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0d980e1c99d405f1c164192a5a588ae90cbe01b4004ef6601252fc948b0540da" Namespace="calico-system" Pod="csi-node-driver-b5nsr" WorkloadEndpoint="ci--4459.1.0--a--27183f81a1-k8s-csi--node--driver--b5nsr-eth0" Oct 13 00:28:28.285877 containerd[1881]: 2025-10-13 00:28:28.257 [INFO][4879] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0d980e1c99d405f1c164192a5a588ae90cbe01b4004ef6601252fc948b0540da" Namespace="calico-system" Pod="csi-node-driver-b5nsr" WorkloadEndpoint="ci--4459.1.0--a--27183f81a1-k8s-csi--node--driver--b5nsr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--a--27183f81a1-k8s-csi--node--driver--b5nsr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", 
SelfLink:"", UID:"4ed0fdc7-31ce-42c5-b3c9-46bf732b034a", ResourceVersion:"688", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 27, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-a-27183f81a1", ContainerID:"0d980e1c99d405f1c164192a5a588ae90cbe01b4004ef6601252fc948b0540da", Pod:"csi-node-driver-b5nsr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.71.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid407bfb6a14", MAC:"a2:35:10:11:07:aa", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:28:28.286018 containerd[1881]: 2025-10-13 00:28:28.281 [INFO][4879] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0d980e1c99d405f1c164192a5a588ae90cbe01b4004ef6601252fc948b0540da" Namespace="calico-system" Pod="csi-node-driver-b5nsr" WorkloadEndpoint="ci--4459.1.0--a--27183f81a1-k8s-csi--node--driver--b5nsr-eth0" Oct 13 00:28:28.324755 systemd-networkd[1476]: cali0957ab9589b: Link UP Oct 13 00:28:28.326061 systemd-networkd[1476]: cali0957ab9589b: Gained carrier Oct 13 00:28:28.350259 containerd[1881]: 2025-10-13 00:28:28.135 [INFO][4889] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint 
projectcalico.org/v3} {ci--4459.1.0--a--27183f81a1-k8s-coredns--668d6bf9bc--mx5qd-eth0 coredns-668d6bf9bc- kube-system 7ce906df-24f0-40c1-9e22-eb83d0c34f2f 845 0 2025-10-13 00:27:32 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459.1.0-a-27183f81a1 coredns-668d6bf9bc-mx5qd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0957ab9589b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="6c7a6fccc9ef57a2fbe18ab155dcea2f9bad4d4ffc1a299f4d4668e3a95b0ec4" Namespace="kube-system" Pod="coredns-668d6bf9bc-mx5qd" WorkloadEndpoint="ci--4459.1.0--a--27183f81a1-k8s-coredns--668d6bf9bc--mx5qd-" Oct 13 00:28:28.350259 containerd[1881]: 2025-10-13 00:28:28.136 [INFO][4889] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6c7a6fccc9ef57a2fbe18ab155dcea2f9bad4d4ffc1a299f4d4668e3a95b0ec4" Namespace="kube-system" Pod="coredns-668d6bf9bc-mx5qd" WorkloadEndpoint="ci--4459.1.0--a--27183f81a1-k8s-coredns--668d6bf9bc--mx5qd-eth0" Oct 13 00:28:28.350259 containerd[1881]: 2025-10-13 00:28:28.177 [INFO][4916] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6c7a6fccc9ef57a2fbe18ab155dcea2f9bad4d4ffc1a299f4d4668e3a95b0ec4" HandleID="k8s-pod-network.6c7a6fccc9ef57a2fbe18ab155dcea2f9bad4d4ffc1a299f4d4668e3a95b0ec4" Workload="ci--4459.1.0--a--27183f81a1-k8s-coredns--668d6bf9bc--mx5qd-eth0" Oct 13 00:28:28.350442 containerd[1881]: 2025-10-13 00:28:28.177 [INFO][4916] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6c7a6fccc9ef57a2fbe18ab155dcea2f9bad4d4ffc1a299f4d4668e3a95b0ec4" HandleID="k8s-pod-network.6c7a6fccc9ef57a2fbe18ab155dcea2f9bad4d4ffc1a299f4d4668e3a95b0ec4" Workload="ci--4459.1.0--a--27183f81a1-k8s-coredns--668d6bf9bc--mx5qd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3640), 
Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459.1.0-a-27183f81a1", "pod":"coredns-668d6bf9bc-mx5qd", "timestamp":"2025-10-13 00:28:28.177384502 +0000 UTC"}, Hostname:"ci-4459.1.0-a-27183f81a1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 00:28:28.350442 containerd[1881]: 2025-10-13 00:28:28.177 [INFO][4916] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 00:28:28.350442 containerd[1881]: 2025-10-13 00:28:28.200 [INFO][4916] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 00:28:28.350442 containerd[1881]: 2025-10-13 00:28:28.200 [INFO][4916] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.1.0-a-27183f81a1' Oct 13 00:28:28.350442 containerd[1881]: 2025-10-13 00:28:28.280 [INFO][4916] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6c7a6fccc9ef57a2fbe18ab155dcea2f9bad4d4ffc1a299f4d4668e3a95b0ec4" host="ci-4459.1.0-a-27183f81a1" Oct 13 00:28:28.350442 containerd[1881]: 2025-10-13 00:28:28.287 [INFO][4916] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.1.0-a-27183f81a1" Oct 13 00:28:28.350442 containerd[1881]: 2025-10-13 00:28:28.291 [INFO][4916] ipam/ipam.go 511: Trying affinity for 192.168.71.128/26 host="ci-4459.1.0-a-27183f81a1" Oct 13 00:28:28.350442 containerd[1881]: 2025-10-13 00:28:28.292 [INFO][4916] ipam/ipam.go 158: Attempting to load block cidr=192.168.71.128/26 host="ci-4459.1.0-a-27183f81a1" Oct 13 00:28:28.350442 containerd[1881]: 2025-10-13 00:28:28.294 [INFO][4916] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.71.128/26 host="ci-4459.1.0-a-27183f81a1" Oct 13 00:28:28.350577 containerd[1881]: 2025-10-13 00:28:28.294 [INFO][4916] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.71.128/26 
handle="k8s-pod-network.6c7a6fccc9ef57a2fbe18ab155dcea2f9bad4d4ffc1a299f4d4668e3a95b0ec4" host="ci-4459.1.0-a-27183f81a1" Oct 13 00:28:28.350577 containerd[1881]: 2025-10-13 00:28:28.295 [INFO][4916] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6c7a6fccc9ef57a2fbe18ab155dcea2f9bad4d4ffc1a299f4d4668e3a95b0ec4 Oct 13 00:28:28.350577 containerd[1881]: 2025-10-13 00:28:28.306 [INFO][4916] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.71.128/26 handle="k8s-pod-network.6c7a6fccc9ef57a2fbe18ab155dcea2f9bad4d4ffc1a299f4d4668e3a95b0ec4" host="ci-4459.1.0-a-27183f81a1" Oct 13 00:28:28.350577 containerd[1881]: 2025-10-13 00:28:28.315 [INFO][4916] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.71.132/26] block=192.168.71.128/26 handle="k8s-pod-network.6c7a6fccc9ef57a2fbe18ab155dcea2f9bad4d4ffc1a299f4d4668e3a95b0ec4" host="ci-4459.1.0-a-27183f81a1" Oct 13 00:28:28.350577 containerd[1881]: 2025-10-13 00:28:28.315 [INFO][4916] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.71.132/26] handle="k8s-pod-network.6c7a6fccc9ef57a2fbe18ab155dcea2f9bad4d4ffc1a299f4d4668e3a95b0ec4" host="ci-4459.1.0-a-27183f81a1" Oct 13 00:28:28.350577 containerd[1881]: 2025-10-13 00:28:28.315 [INFO][4916] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Oct 13 00:28:28.350577 containerd[1881]: 2025-10-13 00:28:28.315 [INFO][4916] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.71.132/26] IPv6=[] ContainerID="6c7a6fccc9ef57a2fbe18ab155dcea2f9bad4d4ffc1a299f4d4668e3a95b0ec4" HandleID="k8s-pod-network.6c7a6fccc9ef57a2fbe18ab155dcea2f9bad4d4ffc1a299f4d4668e3a95b0ec4" Workload="ci--4459.1.0--a--27183f81a1-k8s-coredns--668d6bf9bc--mx5qd-eth0" Oct 13 00:28:28.351369 containerd[1881]: 2025-10-13 00:28:28.317 [INFO][4889] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6c7a6fccc9ef57a2fbe18ab155dcea2f9bad4d4ffc1a299f4d4668e3a95b0ec4" Namespace="kube-system" Pod="coredns-668d6bf9bc-mx5qd" WorkloadEndpoint="ci--4459.1.0--a--27183f81a1-k8s-coredns--668d6bf9bc--mx5qd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--a--27183f81a1-k8s-coredns--668d6bf9bc--mx5qd-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"7ce906df-24f0-40c1-9e22-eb83d0c34f2f", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 27, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-a-27183f81a1", ContainerID:"", Pod:"coredns-668d6bf9bc-mx5qd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.71.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"cali0957ab9589b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:28:28.351369 containerd[1881]: 2025-10-13 00:28:28.317 [INFO][4889] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.71.132/32] ContainerID="6c7a6fccc9ef57a2fbe18ab155dcea2f9bad4d4ffc1a299f4d4668e3a95b0ec4" Namespace="kube-system" Pod="coredns-668d6bf9bc-mx5qd" WorkloadEndpoint="ci--4459.1.0--a--27183f81a1-k8s-coredns--668d6bf9bc--mx5qd-eth0" Oct 13 00:28:28.351369 containerd[1881]: 2025-10-13 00:28:28.317 [INFO][4889] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0957ab9589b ContainerID="6c7a6fccc9ef57a2fbe18ab155dcea2f9bad4d4ffc1a299f4d4668e3a95b0ec4" Namespace="kube-system" Pod="coredns-668d6bf9bc-mx5qd" WorkloadEndpoint="ci--4459.1.0--a--27183f81a1-k8s-coredns--668d6bf9bc--mx5qd-eth0" Oct 13 00:28:28.351369 containerd[1881]: 2025-10-13 00:28:28.329 [INFO][4889] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6c7a6fccc9ef57a2fbe18ab155dcea2f9bad4d4ffc1a299f4d4668e3a95b0ec4" Namespace="kube-system" Pod="coredns-668d6bf9bc-mx5qd" WorkloadEndpoint="ci--4459.1.0--a--27183f81a1-k8s-coredns--668d6bf9bc--mx5qd-eth0" Oct 13 00:28:28.351369 containerd[1881]: 2025-10-13 00:28:28.329 [INFO][4889] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6c7a6fccc9ef57a2fbe18ab155dcea2f9bad4d4ffc1a299f4d4668e3a95b0ec4" Namespace="kube-system" Pod="coredns-668d6bf9bc-mx5qd" 
WorkloadEndpoint="ci--4459.1.0--a--27183f81a1-k8s-coredns--668d6bf9bc--mx5qd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--a--27183f81a1-k8s-coredns--668d6bf9bc--mx5qd-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"7ce906df-24f0-40c1-9e22-eb83d0c34f2f", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 27, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-a-27183f81a1", ContainerID:"6c7a6fccc9ef57a2fbe18ab155dcea2f9bad4d4ffc1a299f4d4668e3a95b0ec4", Pod:"coredns-668d6bf9bc-mx5qd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.71.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0957ab9589b", MAC:"66:72:2f:27:7d:2f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:28:28.351369 
containerd[1881]: 2025-10-13 00:28:28.346 [INFO][4889] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6c7a6fccc9ef57a2fbe18ab155dcea2f9bad4d4ffc1a299f4d4668e3a95b0ec4" Namespace="kube-system" Pod="coredns-668d6bf9bc-mx5qd" WorkloadEndpoint="ci--4459.1.0--a--27183f81a1-k8s-coredns--668d6bf9bc--mx5qd-eth0" Oct 13 00:28:28.379796 containerd[1881]: time="2025-10-13T00:28:28.379740756Z" level=info msg="connecting to shim 0d980e1c99d405f1c164192a5a588ae90cbe01b4004ef6601252fc948b0540da" address="unix:///run/containerd/s/ee2a7f9f4fd7470910858da1558a7822cc8224558736930afced32db1b4e29f9" namespace=k8s.io protocol=ttrpc version=3 Oct 13 00:28:28.407200 systemd[1]: Started cri-containerd-0d980e1c99d405f1c164192a5a588ae90cbe01b4004ef6601252fc948b0540da.scope - libcontainer container 0d980e1c99d405f1c164192a5a588ae90cbe01b4004ef6601252fc948b0540da. Oct 13 00:28:28.418272 containerd[1881]: time="2025-10-13T00:28:28.418232720Z" level=info msg="connecting to shim 6c7a6fccc9ef57a2fbe18ab155dcea2f9bad4d4ffc1a299f4d4668e3a95b0ec4" address="unix:///run/containerd/s/0b66fa06f8d1b14a2b37c85c9aace21612995c4cf8c41d8d554e6f710b8ec7e1" namespace=k8s.io protocol=ttrpc version=3 Oct 13 00:28:28.440021 systemd-networkd[1476]: cali809e6a2231d: Link UP Oct 13 00:28:28.440671 systemd-networkd[1476]: cali809e6a2231d: Gained carrier Oct 13 00:28:28.466168 systemd[1]: Started cri-containerd-6c7a6fccc9ef57a2fbe18ab155dcea2f9bad4d4ffc1a299f4d4668e3a95b0ec4.scope - libcontainer container 6c7a6fccc9ef57a2fbe18ab155dcea2f9bad4d4ffc1a299f4d4668e3a95b0ec4. 
Oct 13 00:28:28.469662 containerd[1881]: time="2025-10-13T00:28:28.469471111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b5nsr,Uid:4ed0fdc7-31ce-42c5-b3c9-46bf732b034a,Namespace:calico-system,Attempt:0,} returns sandbox id \"0d980e1c99d405f1c164192a5a588ae90cbe01b4004ef6601252fc948b0540da\"" Oct 13 00:28:28.470436 containerd[1881]: 2025-10-13 00:28:28.152 [INFO][4901] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.1.0--a--27183f81a1-k8s-calico--kube--controllers--844f59d67c--9hbnj-eth0 calico-kube-controllers-844f59d67c- calico-system 8a7eef07-5d01-44a9-aad7-187883c09c3b 854 0 2025-10-13 00:27:53 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:844f59d67c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4459.1.0-a-27183f81a1 calico-kube-controllers-844f59d67c-9hbnj eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali809e6a2231d [] [] }} ContainerID="f6aa78c80cd67efb144f7bf24ea793e451365cb6d28a91c7951a6bc77890f8a5" Namespace="calico-system" Pod="calico-kube-controllers-844f59d67c-9hbnj" WorkloadEndpoint="ci--4459.1.0--a--27183f81a1-k8s-calico--kube--controllers--844f59d67c--9hbnj-" Oct 13 00:28:28.470436 containerd[1881]: 2025-10-13 00:28:28.152 [INFO][4901] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f6aa78c80cd67efb144f7bf24ea793e451365cb6d28a91c7951a6bc77890f8a5" Namespace="calico-system" Pod="calico-kube-controllers-844f59d67c-9hbnj" WorkloadEndpoint="ci--4459.1.0--a--27183f81a1-k8s-calico--kube--controllers--844f59d67c--9hbnj-eth0" Oct 13 00:28:28.470436 containerd[1881]: 2025-10-13 00:28:28.180 [INFO][4929] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="f6aa78c80cd67efb144f7bf24ea793e451365cb6d28a91c7951a6bc77890f8a5" HandleID="k8s-pod-network.f6aa78c80cd67efb144f7bf24ea793e451365cb6d28a91c7951a6bc77890f8a5" Workload="ci--4459.1.0--a--27183f81a1-k8s-calico--kube--controllers--844f59d67c--9hbnj-eth0" Oct 13 00:28:28.470436 containerd[1881]: 2025-10-13 00:28:28.181 [INFO][4929] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f6aa78c80cd67efb144f7bf24ea793e451365cb6d28a91c7951a6bc77890f8a5" HandleID="k8s-pod-network.f6aa78c80cd67efb144f7bf24ea793e451365cb6d28a91c7951a6bc77890f8a5" Workload="ci--4459.1.0--a--27183f81a1-k8s-calico--kube--controllers--844f59d67c--9hbnj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3600), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.1.0-a-27183f81a1", "pod":"calico-kube-controllers-844f59d67c-9hbnj", "timestamp":"2025-10-13 00:28:28.18061077 +0000 UTC"}, Hostname:"ci-4459.1.0-a-27183f81a1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 00:28:28.470436 containerd[1881]: 2025-10-13 00:28:28.181 [INFO][4929] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 00:28:28.470436 containerd[1881]: 2025-10-13 00:28:28.315 [INFO][4929] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 00:28:28.470436 containerd[1881]: 2025-10-13 00:28:28.315 [INFO][4929] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.1.0-a-27183f81a1' Oct 13 00:28:28.470436 containerd[1881]: 2025-10-13 00:28:28.374 [INFO][4929] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f6aa78c80cd67efb144f7bf24ea793e451365cb6d28a91c7951a6bc77890f8a5" host="ci-4459.1.0-a-27183f81a1" Oct 13 00:28:28.470436 containerd[1881]: 2025-10-13 00:28:28.388 [INFO][4929] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.1.0-a-27183f81a1" Oct 13 00:28:28.470436 containerd[1881]: 2025-10-13 00:28:28.395 [INFO][4929] ipam/ipam.go 511: Trying affinity for 192.168.71.128/26 host="ci-4459.1.0-a-27183f81a1" Oct 13 00:28:28.470436 containerd[1881]: 2025-10-13 00:28:28.397 [INFO][4929] ipam/ipam.go 158: Attempting to load block cidr=192.168.71.128/26 host="ci-4459.1.0-a-27183f81a1" Oct 13 00:28:28.470436 containerd[1881]: 2025-10-13 00:28:28.402 [INFO][4929] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.71.128/26 host="ci-4459.1.0-a-27183f81a1" Oct 13 00:28:28.470436 containerd[1881]: 2025-10-13 00:28:28.402 [INFO][4929] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.71.128/26 handle="k8s-pod-network.f6aa78c80cd67efb144f7bf24ea793e451365cb6d28a91c7951a6bc77890f8a5" host="ci-4459.1.0-a-27183f81a1" Oct 13 00:28:28.470436 containerd[1881]: 2025-10-13 00:28:28.406 [INFO][4929] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f6aa78c80cd67efb144f7bf24ea793e451365cb6d28a91c7951a6bc77890f8a5 Oct 13 00:28:28.470436 containerd[1881]: 2025-10-13 00:28:28.416 [INFO][4929] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.71.128/26 handle="k8s-pod-network.f6aa78c80cd67efb144f7bf24ea793e451365cb6d28a91c7951a6bc77890f8a5" host="ci-4459.1.0-a-27183f81a1" Oct 13 00:28:28.470436 containerd[1881]: 2025-10-13 00:28:28.430 [INFO][4929] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.71.133/26] block=192.168.71.128/26 handle="k8s-pod-network.f6aa78c80cd67efb144f7bf24ea793e451365cb6d28a91c7951a6bc77890f8a5" host="ci-4459.1.0-a-27183f81a1" Oct 13 00:28:28.470436 containerd[1881]: 2025-10-13 00:28:28.430 [INFO][4929] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.71.133/26] handle="k8s-pod-network.f6aa78c80cd67efb144f7bf24ea793e451365cb6d28a91c7951a6bc77890f8a5" host="ci-4459.1.0-a-27183f81a1" Oct 13 00:28:28.470436 containerd[1881]: 2025-10-13 00:28:28.430 [INFO][4929] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 00:28:28.470436 containerd[1881]: 2025-10-13 00:28:28.431 [INFO][4929] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.71.133/26] IPv6=[] ContainerID="f6aa78c80cd67efb144f7bf24ea793e451365cb6d28a91c7951a6bc77890f8a5" HandleID="k8s-pod-network.f6aa78c80cd67efb144f7bf24ea793e451365cb6d28a91c7951a6bc77890f8a5" Workload="ci--4459.1.0--a--27183f81a1-k8s-calico--kube--controllers--844f59d67c--9hbnj-eth0" Oct 13 00:28:28.471224 containerd[1881]: 2025-10-13 00:28:28.432 [INFO][4901] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f6aa78c80cd67efb144f7bf24ea793e451365cb6d28a91c7951a6bc77890f8a5" Namespace="calico-system" Pod="calico-kube-controllers-844f59d67c-9hbnj" WorkloadEndpoint="ci--4459.1.0--a--27183f81a1-k8s-calico--kube--controllers--844f59d67c--9hbnj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--a--27183f81a1-k8s-calico--kube--controllers--844f59d67c--9hbnj-eth0", GenerateName:"calico-kube-controllers-844f59d67c-", Namespace:"calico-system", SelfLink:"", UID:"8a7eef07-5d01-44a9-aad7-187883c09c3b", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 27, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"844f59d67c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-a-27183f81a1", ContainerID:"", Pod:"calico-kube-controllers-844f59d67c-9hbnj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.71.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali809e6a2231d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:28:28.471224 containerd[1881]: 2025-10-13 00:28:28.433 [INFO][4901] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.71.133/32] ContainerID="f6aa78c80cd67efb144f7bf24ea793e451365cb6d28a91c7951a6bc77890f8a5" Namespace="calico-system" Pod="calico-kube-controllers-844f59d67c-9hbnj" WorkloadEndpoint="ci--4459.1.0--a--27183f81a1-k8s-calico--kube--controllers--844f59d67c--9hbnj-eth0" Oct 13 00:28:28.471224 containerd[1881]: 2025-10-13 00:28:28.433 [INFO][4901] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali809e6a2231d ContainerID="f6aa78c80cd67efb144f7bf24ea793e451365cb6d28a91c7951a6bc77890f8a5" Namespace="calico-system" Pod="calico-kube-controllers-844f59d67c-9hbnj" WorkloadEndpoint="ci--4459.1.0--a--27183f81a1-k8s-calico--kube--controllers--844f59d67c--9hbnj-eth0" Oct 13 00:28:28.471224 containerd[1881]: 2025-10-13 00:28:28.441 [INFO][4901] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="f6aa78c80cd67efb144f7bf24ea793e451365cb6d28a91c7951a6bc77890f8a5" Namespace="calico-system" Pod="calico-kube-controllers-844f59d67c-9hbnj" WorkloadEndpoint="ci--4459.1.0--a--27183f81a1-k8s-calico--kube--controllers--844f59d67c--9hbnj-eth0" Oct 13 00:28:28.471224 containerd[1881]: 2025-10-13 00:28:28.442 [INFO][4901] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f6aa78c80cd67efb144f7bf24ea793e451365cb6d28a91c7951a6bc77890f8a5" Namespace="calico-system" Pod="calico-kube-controllers-844f59d67c-9hbnj" WorkloadEndpoint="ci--4459.1.0--a--27183f81a1-k8s-calico--kube--controllers--844f59d67c--9hbnj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--a--27183f81a1-k8s-calico--kube--controllers--844f59d67c--9hbnj-eth0", GenerateName:"calico-kube-controllers-844f59d67c-", Namespace:"calico-system", SelfLink:"", UID:"8a7eef07-5d01-44a9-aad7-187883c09c3b", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 27, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"844f59d67c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-a-27183f81a1", ContainerID:"f6aa78c80cd67efb144f7bf24ea793e451365cb6d28a91c7951a6bc77890f8a5", Pod:"calico-kube-controllers-844f59d67c-9hbnj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.71.133/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali809e6a2231d", MAC:"6e:51:af:78:f7:e1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:28:28.471224 containerd[1881]: 2025-10-13 00:28:28.465 [INFO][4901] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f6aa78c80cd67efb144f7bf24ea793e451365cb6d28a91c7951a6bc77890f8a5" Namespace="calico-system" Pod="calico-kube-controllers-844f59d67c-9hbnj" WorkloadEndpoint="ci--4459.1.0--a--27183f81a1-k8s-calico--kube--controllers--844f59d67c--9hbnj-eth0" Oct 13 00:28:28.514123 containerd[1881]: time="2025-10-13T00:28:28.514080968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mx5qd,Uid:7ce906df-24f0-40c1-9e22-eb83d0c34f2f,Namespace:kube-system,Attempt:0,} returns sandbox id \"6c7a6fccc9ef57a2fbe18ab155dcea2f9bad4d4ffc1a299f4d4668e3a95b0ec4\"" Oct 13 00:28:28.517466 containerd[1881]: time="2025-10-13T00:28:28.517432649Z" level=info msg="CreateContainer within sandbox \"6c7a6fccc9ef57a2fbe18ab155dcea2f9bad4d4ffc1a299f4d4668e3a95b0ec4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 13 00:28:28.527544 containerd[1881]: time="2025-10-13T00:28:28.527165383Z" level=info msg="connecting to shim f6aa78c80cd67efb144f7bf24ea793e451365cb6d28a91c7951a6bc77890f8a5" address="unix:///run/containerd/s/f0a1f98cb9d0b7ee262b4fbedecab84fdfdb6bd3bf3659000c417d0325693120" namespace=k8s.io protocol=ttrpc version=3 Oct 13 00:28:28.541682 containerd[1881]: time="2025-10-13T00:28:28.541564475Z" level=info msg="Container 8514767745eca64d72b85cd867f2b0785ea32f9048d5357c4ed5dc869e44a817: CDI devices from CRI Config.CDIDevices: []" Oct 13 00:28:28.545057 systemd[1]: Started cri-containerd-f6aa78c80cd67efb144f7bf24ea793e451365cb6d28a91c7951a6bc77890f8a5.scope - libcontainer container 
f6aa78c80cd67efb144f7bf24ea793e451365cb6d28a91c7951a6bc77890f8a5. Oct 13 00:28:28.682786 containerd[1881]: time="2025-10-13T00:28:28.682549246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-844f59d67c-9hbnj,Uid:8a7eef07-5d01-44a9-aad7-187883c09c3b,Namespace:calico-system,Attempt:0,} returns sandbox id \"f6aa78c80cd67efb144f7bf24ea793e451365cb6d28a91c7951a6bc77890f8a5\"" Oct 13 00:28:28.744723 containerd[1881]: time="2025-10-13T00:28:28.744501621Z" level=info msg="CreateContainer within sandbox \"6c7a6fccc9ef57a2fbe18ab155dcea2f9bad4d4ffc1a299f4d4668e3a95b0ec4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8514767745eca64d72b85cd867f2b0785ea32f9048d5357c4ed5dc869e44a817\"" Oct 13 00:28:28.746278 containerd[1881]: time="2025-10-13T00:28:28.746248711Z" level=info msg="StartContainer for \"8514767745eca64d72b85cd867f2b0785ea32f9048d5357c4ed5dc869e44a817\"" Oct 13 00:28:28.747750 containerd[1881]: time="2025-10-13T00:28:28.747661351Z" level=info msg="connecting to shim 8514767745eca64d72b85cd867f2b0785ea32f9048d5357c4ed5dc869e44a817" address="unix:///run/containerd/s/0b66fa06f8d1b14a2b37c85c9aace21612995c4cf8c41d8d554e6f710b8ec7e1" protocol=ttrpc version=3 Oct 13 00:28:28.771081 systemd[1]: Started cri-containerd-8514767745eca64d72b85cd867f2b0785ea32f9048d5357c4ed5dc869e44a817.scope - libcontainer container 8514767745eca64d72b85cd867f2b0785ea32f9048d5357c4ed5dc869e44a817. 
Oct 13 00:28:29.062525 containerd[1881]: time="2025-10-13T00:28:29.062480683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7lknh,Uid:6c8898f3-e8e8-4ac1-bc62-12aa1248ba56,Namespace:kube-system,Attempt:0,}" Oct 13 00:28:29.065538 containerd[1881]: time="2025-10-13T00:28:29.065384445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-745499777d-j7f7t,Uid:6541987e-4c6e-48c0-877c-85d5d35cabc1,Namespace:calico-apiserver,Attempt:0,}" Oct 13 00:28:29.726081 systemd-networkd[1476]: calid407bfb6a14: Gained IPv6LL Oct 13 00:28:30.366110 systemd-networkd[1476]: cali0957ab9589b: Gained IPv6LL Oct 13 00:28:32.377479 containerd[1881]: time="2025-10-13T00:28:30.062446087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d4d7db98f-gx5fl,Uid:dd6494fc-6077-4e51-941e-0d0f0b6a8344,Namespace:calico-apiserver,Attempt:0,}" Oct 13 00:28:30.494092 systemd-networkd[1476]: cali809e6a2231d: Gained IPv6LL Oct 13 00:28:33.436218 containerd[1881]: time="2025-10-13T00:28:33.436191291Z" level=info msg="StartContainer for \"8514767745eca64d72b85cd867f2b0785ea32f9048d5357c4ed5dc869e44a817\" returns successfully" Oct 13 00:28:33.481571 containerd[1881]: time="2025-10-13T00:28:33.447034510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d4d7db98f-ctlg8,Uid:d5df1242-3535-480f-83ed-f48fbf6f9e8f,Namespace:calico-apiserver,Attempt:0,}" Oct 13 00:28:33.481633 kubelet[3373]: E1013 00:28:33.440573 3373 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.38s" Oct 13 00:28:33.720905 systemd-networkd[1476]: calif9611cc5089: Link UP Oct 13 00:28:33.721210 systemd-networkd[1476]: calif9611cc5089: Gained carrier Oct 13 00:28:33.743452 containerd[1881]: 2025-10-13 00:28:33.664 [INFO][5156] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4459.1.0--a--27183f81a1-k8s-calico--apiserver--745499777d--j7f7t-eth0 calico-apiserver-745499777d- calico-apiserver 6541987e-4c6e-48c0-877c-85d5d35cabc1 853 0 2025-10-13 00:27:49 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:745499777d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.1.0-a-27183f81a1 calico-apiserver-745499777d-j7f7t eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif9611cc5089 [] [] }} ContainerID="29929191cff5b852d575f16e7248558502e1f1fabe0a56e6c076fc8cb321786c" Namespace="calico-apiserver" Pod="calico-apiserver-745499777d-j7f7t" WorkloadEndpoint="ci--4459.1.0--a--27183f81a1-k8s-calico--apiserver--745499777d--j7f7t-" Oct 13 00:28:33.743452 containerd[1881]: 2025-10-13 00:28:33.664 [INFO][5156] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="29929191cff5b852d575f16e7248558502e1f1fabe0a56e6c076fc8cb321786c" Namespace="calico-apiserver" Pod="calico-apiserver-745499777d-j7f7t" WorkloadEndpoint="ci--4459.1.0--a--27183f81a1-k8s-calico--apiserver--745499777d--j7f7t-eth0" Oct 13 00:28:33.743452 containerd[1881]: 2025-10-13 00:28:33.682 [INFO][5168] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="29929191cff5b852d575f16e7248558502e1f1fabe0a56e6c076fc8cb321786c" HandleID="k8s-pod-network.29929191cff5b852d575f16e7248558502e1f1fabe0a56e6c076fc8cb321786c" Workload="ci--4459.1.0--a--27183f81a1-k8s-calico--apiserver--745499777d--j7f7t-eth0" Oct 13 00:28:33.743452 containerd[1881]: 2025-10-13 00:28:33.682 [INFO][5168] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="29929191cff5b852d575f16e7248558502e1f1fabe0a56e6c076fc8cb321786c" HandleID="k8s-pod-network.29929191cff5b852d575f16e7248558502e1f1fabe0a56e6c076fc8cb321786c" 
Workload="ci--4459.1.0--a--27183f81a1-k8s-calico--apiserver--745499777d--j7f7t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3750), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459.1.0-a-27183f81a1", "pod":"calico-apiserver-745499777d-j7f7t", "timestamp":"2025-10-13 00:28:33.682762861 +0000 UTC"}, Hostname:"ci-4459.1.0-a-27183f81a1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 00:28:33.743452 containerd[1881]: 2025-10-13 00:28:33.683 [INFO][5168] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 00:28:33.743452 containerd[1881]: 2025-10-13 00:28:33.683 [INFO][5168] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 00:28:33.743452 containerd[1881]: 2025-10-13 00:28:33.683 [INFO][5168] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.1.0-a-27183f81a1' Oct 13 00:28:33.743452 containerd[1881]: 2025-10-13 00:28:33.687 [INFO][5168] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.29929191cff5b852d575f16e7248558502e1f1fabe0a56e6c076fc8cb321786c" host="ci-4459.1.0-a-27183f81a1" Oct 13 00:28:33.743452 containerd[1881]: 2025-10-13 00:28:33.692 [INFO][5168] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.1.0-a-27183f81a1" Oct 13 00:28:33.743452 containerd[1881]: 2025-10-13 00:28:33.696 [INFO][5168] ipam/ipam.go 511: Trying affinity for 192.168.71.128/26 host="ci-4459.1.0-a-27183f81a1" Oct 13 00:28:33.743452 containerd[1881]: 2025-10-13 00:28:33.697 [INFO][5168] ipam/ipam.go 158: Attempting to load block cidr=192.168.71.128/26 host="ci-4459.1.0-a-27183f81a1" Oct 13 00:28:33.743452 containerd[1881]: 2025-10-13 00:28:33.699 [INFO][5168] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.71.128/26 host="ci-4459.1.0-a-27183f81a1" 
Oct 13 00:28:33.743452 containerd[1881]: 2025-10-13 00:28:33.699 [INFO][5168] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.71.128/26 handle="k8s-pod-network.29929191cff5b852d575f16e7248558502e1f1fabe0a56e6c076fc8cb321786c" host="ci-4459.1.0-a-27183f81a1" Oct 13 00:28:33.743452 containerd[1881]: 2025-10-13 00:28:33.700 [INFO][5168] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.29929191cff5b852d575f16e7248558502e1f1fabe0a56e6c076fc8cb321786c Oct 13 00:28:33.743452 containerd[1881]: 2025-10-13 00:28:33.705 [INFO][5168] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.71.128/26 handle="k8s-pod-network.29929191cff5b852d575f16e7248558502e1f1fabe0a56e6c076fc8cb321786c" host="ci-4459.1.0-a-27183f81a1" Oct 13 00:28:33.743452 containerd[1881]: 2025-10-13 00:28:33.716 [INFO][5168] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.71.134/26] block=192.168.71.128/26 handle="k8s-pod-network.29929191cff5b852d575f16e7248558502e1f1fabe0a56e6c076fc8cb321786c" host="ci-4459.1.0-a-27183f81a1" Oct 13 00:28:33.743452 containerd[1881]: 2025-10-13 00:28:33.716 [INFO][5168] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.71.134/26] handle="k8s-pod-network.29929191cff5b852d575f16e7248558502e1f1fabe0a56e6c076fc8cb321786c" host="ci-4459.1.0-a-27183f81a1" Oct 13 00:28:33.743452 containerd[1881]: 2025-10-13 00:28:33.716 [INFO][5168] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Oct 13 00:28:33.743452 containerd[1881]: 2025-10-13 00:28:33.716 [INFO][5168] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.71.134/26] IPv6=[] ContainerID="29929191cff5b852d575f16e7248558502e1f1fabe0a56e6c076fc8cb321786c" HandleID="k8s-pod-network.29929191cff5b852d575f16e7248558502e1f1fabe0a56e6c076fc8cb321786c" Workload="ci--4459.1.0--a--27183f81a1-k8s-calico--apiserver--745499777d--j7f7t-eth0" Oct 13 00:28:33.745097 containerd[1881]: 2025-10-13 00:28:33.718 [INFO][5156] cni-plugin/k8s.go 418: Populated endpoint ContainerID="29929191cff5b852d575f16e7248558502e1f1fabe0a56e6c076fc8cb321786c" Namespace="calico-apiserver" Pod="calico-apiserver-745499777d-j7f7t" WorkloadEndpoint="ci--4459.1.0--a--27183f81a1-k8s-calico--apiserver--745499777d--j7f7t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--a--27183f81a1-k8s-calico--apiserver--745499777d--j7f7t-eth0", GenerateName:"calico-apiserver-745499777d-", Namespace:"calico-apiserver", SelfLink:"", UID:"6541987e-4c6e-48c0-877c-85d5d35cabc1", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 27, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"745499777d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-a-27183f81a1", ContainerID:"", Pod:"calico-apiserver-745499777d-j7f7t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.71.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif9611cc5089", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:28:33.745097 containerd[1881]: 2025-10-13 00:28:33.718 [INFO][5156] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.71.134/32] ContainerID="29929191cff5b852d575f16e7248558502e1f1fabe0a56e6c076fc8cb321786c" Namespace="calico-apiserver" Pod="calico-apiserver-745499777d-j7f7t" WorkloadEndpoint="ci--4459.1.0--a--27183f81a1-k8s-calico--apiserver--745499777d--j7f7t-eth0" Oct 13 00:28:33.745097 containerd[1881]: 2025-10-13 00:28:33.718 [INFO][5156] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif9611cc5089 ContainerID="29929191cff5b852d575f16e7248558502e1f1fabe0a56e6c076fc8cb321786c" Namespace="calico-apiserver" Pod="calico-apiserver-745499777d-j7f7t" WorkloadEndpoint="ci--4459.1.0--a--27183f81a1-k8s-calico--apiserver--745499777d--j7f7t-eth0" Oct 13 00:28:33.745097 containerd[1881]: 2025-10-13 00:28:33.722 [INFO][5156] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="29929191cff5b852d575f16e7248558502e1f1fabe0a56e6c076fc8cb321786c" Namespace="calico-apiserver" Pod="calico-apiserver-745499777d-j7f7t" WorkloadEndpoint="ci--4459.1.0--a--27183f81a1-k8s-calico--apiserver--745499777d--j7f7t-eth0" Oct 13 00:28:33.745097 containerd[1881]: 2025-10-13 00:28:33.723 [INFO][5156] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="29929191cff5b852d575f16e7248558502e1f1fabe0a56e6c076fc8cb321786c" Namespace="calico-apiserver" Pod="calico-apiserver-745499777d-j7f7t" WorkloadEndpoint="ci--4459.1.0--a--27183f81a1-k8s-calico--apiserver--745499777d--j7f7t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--a--27183f81a1-k8s-calico--apiserver--745499777d--j7f7t-eth0", GenerateName:"calico-apiserver-745499777d-", Namespace:"calico-apiserver", SelfLink:"", UID:"6541987e-4c6e-48c0-877c-85d5d35cabc1", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 27, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"745499777d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-a-27183f81a1", ContainerID:"29929191cff5b852d575f16e7248558502e1f1fabe0a56e6c076fc8cb321786c", Pod:"calico-apiserver-745499777d-j7f7t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.71.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif9611cc5089", MAC:"32:44:ad:c7:29:44", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:28:33.745097 containerd[1881]: 2025-10-13 00:28:33.738 [INFO][5156] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="29929191cff5b852d575f16e7248558502e1f1fabe0a56e6c076fc8cb321786c" Namespace="calico-apiserver" Pod="calico-apiserver-745499777d-j7f7t" WorkloadEndpoint="ci--4459.1.0--a--27183f81a1-k8s-calico--apiserver--745499777d--j7f7t-eth0" Oct 13 00:28:33.866803 systemd-networkd[1476]: cali2e82ea46fd6: Link UP Oct 13 
00:28:33.867274 systemd-networkd[1476]: cali2e82ea46fd6: Gained carrier Oct 13 00:28:33.883165 containerd[1881]: 2025-10-13 00:28:33.803 [INFO][5185] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.1.0--a--27183f81a1-k8s-coredns--668d6bf9bc--7lknh-eth0 coredns-668d6bf9bc- kube-system 6c8898f3-e8e8-4ac1-bc62-12aa1248ba56 842 0 2025-10-13 00:27:32 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459.1.0-a-27183f81a1 coredns-668d6bf9bc-7lknh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2e82ea46fd6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="55d47cc0e90ca1ec2816dd53222cb245ad261837e055679febfd933d84d67d2e" Namespace="kube-system" Pod="coredns-668d6bf9bc-7lknh" WorkloadEndpoint="ci--4459.1.0--a--27183f81a1-k8s-coredns--668d6bf9bc--7lknh-" Oct 13 00:28:33.883165 containerd[1881]: 2025-10-13 00:28:33.804 [INFO][5185] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="55d47cc0e90ca1ec2816dd53222cb245ad261837e055679febfd933d84d67d2e" Namespace="kube-system" Pod="coredns-668d6bf9bc-7lknh" WorkloadEndpoint="ci--4459.1.0--a--27183f81a1-k8s-coredns--668d6bf9bc--7lknh-eth0" Oct 13 00:28:33.883165 containerd[1881]: 2025-10-13 00:28:33.823 [INFO][5197] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="55d47cc0e90ca1ec2816dd53222cb245ad261837e055679febfd933d84d67d2e" HandleID="k8s-pod-network.55d47cc0e90ca1ec2816dd53222cb245ad261837e055679febfd933d84d67d2e" Workload="ci--4459.1.0--a--27183f81a1-k8s-coredns--668d6bf9bc--7lknh-eth0" Oct 13 00:28:33.883165 containerd[1881]: 2025-10-13 00:28:33.823 [INFO][5197] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="55d47cc0e90ca1ec2816dd53222cb245ad261837e055679febfd933d84d67d2e" 
HandleID="k8s-pod-network.55d47cc0e90ca1ec2816dd53222cb245ad261837e055679febfd933d84d67d2e" Workload="ci--4459.1.0--a--27183f81a1-k8s-coredns--668d6bf9bc--7lknh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b010), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459.1.0-a-27183f81a1", "pod":"coredns-668d6bf9bc-7lknh", "timestamp":"2025-10-13 00:28:33.823726679 +0000 UTC"}, Hostname:"ci-4459.1.0-a-27183f81a1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 00:28:33.883165 containerd[1881]: 2025-10-13 00:28:33.823 [INFO][5197] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 00:28:33.883165 containerd[1881]: 2025-10-13 00:28:33.823 [INFO][5197] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 00:28:33.883165 containerd[1881]: 2025-10-13 00:28:33.823 [INFO][5197] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.1.0-a-27183f81a1' Oct 13 00:28:33.883165 containerd[1881]: 2025-10-13 00:28:33.837 [INFO][5197] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.55d47cc0e90ca1ec2816dd53222cb245ad261837e055679febfd933d84d67d2e" host="ci-4459.1.0-a-27183f81a1" Oct 13 00:28:33.883165 containerd[1881]: 2025-10-13 00:28:33.841 [INFO][5197] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.1.0-a-27183f81a1" Oct 13 00:28:33.883165 containerd[1881]: 2025-10-13 00:28:33.844 [INFO][5197] ipam/ipam.go 511: Trying affinity for 192.168.71.128/26 host="ci-4459.1.0-a-27183f81a1" Oct 13 00:28:33.883165 containerd[1881]: 2025-10-13 00:28:33.845 [INFO][5197] ipam/ipam.go 158: Attempting to load block cidr=192.168.71.128/26 host="ci-4459.1.0-a-27183f81a1" Oct 13 00:28:33.883165 containerd[1881]: 2025-10-13 00:28:33.846 [INFO][5197] ipam/ipam.go 235: Affinity is confirmed and block has 
been loaded cidr=192.168.71.128/26 host="ci-4459.1.0-a-27183f81a1" Oct 13 00:28:33.883165 containerd[1881]: 2025-10-13 00:28:33.846 [INFO][5197] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.71.128/26 handle="k8s-pod-network.55d47cc0e90ca1ec2816dd53222cb245ad261837e055679febfd933d84d67d2e" host="ci-4459.1.0-a-27183f81a1" Oct 13 00:28:33.883165 containerd[1881]: 2025-10-13 00:28:33.847 [INFO][5197] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.55d47cc0e90ca1ec2816dd53222cb245ad261837e055679febfd933d84d67d2e Oct 13 00:28:33.883165 containerd[1881]: 2025-10-13 00:28:33.851 [INFO][5197] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.71.128/26 handle="k8s-pod-network.55d47cc0e90ca1ec2816dd53222cb245ad261837e055679febfd933d84d67d2e" host="ci-4459.1.0-a-27183f81a1" Oct 13 00:28:33.883165 containerd[1881]: 2025-10-13 00:28:33.861 [INFO][5197] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.71.135/26] block=192.168.71.128/26 handle="k8s-pod-network.55d47cc0e90ca1ec2816dd53222cb245ad261837e055679febfd933d84d67d2e" host="ci-4459.1.0-a-27183f81a1" Oct 13 00:28:33.883165 containerd[1881]: 2025-10-13 00:28:33.861 [INFO][5197] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.71.135/26] handle="k8s-pod-network.55d47cc0e90ca1ec2816dd53222cb245ad261837e055679febfd933d84d67d2e" host="ci-4459.1.0-a-27183f81a1" Oct 13 00:28:33.883165 containerd[1881]: 2025-10-13 00:28:33.861 [INFO][5197] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Oct 13 00:28:33.883165 containerd[1881]: 2025-10-13 00:28:33.861 [INFO][5197] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.71.135/26] IPv6=[] ContainerID="55d47cc0e90ca1ec2816dd53222cb245ad261837e055679febfd933d84d67d2e" HandleID="k8s-pod-network.55d47cc0e90ca1ec2816dd53222cb245ad261837e055679febfd933d84d67d2e" Workload="ci--4459.1.0--a--27183f81a1-k8s-coredns--668d6bf9bc--7lknh-eth0" Oct 13 00:28:33.883567 containerd[1881]: 2025-10-13 00:28:33.863 [INFO][5185] cni-plugin/k8s.go 418: Populated endpoint ContainerID="55d47cc0e90ca1ec2816dd53222cb245ad261837e055679febfd933d84d67d2e" Namespace="kube-system" Pod="coredns-668d6bf9bc-7lknh" WorkloadEndpoint="ci--4459.1.0--a--27183f81a1-k8s-coredns--668d6bf9bc--7lknh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--a--27183f81a1-k8s-coredns--668d6bf9bc--7lknh-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"6c8898f3-e8e8-4ac1-bc62-12aa1248ba56", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 27, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-a-27183f81a1", ContainerID:"", Pod:"coredns-668d6bf9bc-7lknh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.71.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"cali2e82ea46fd6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:28:33.883567 containerd[1881]: 2025-10-13 00:28:33.863 [INFO][5185] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.71.135/32] ContainerID="55d47cc0e90ca1ec2816dd53222cb245ad261837e055679febfd933d84d67d2e" Namespace="kube-system" Pod="coredns-668d6bf9bc-7lknh" WorkloadEndpoint="ci--4459.1.0--a--27183f81a1-k8s-coredns--668d6bf9bc--7lknh-eth0" Oct 13 00:28:33.883567 containerd[1881]: 2025-10-13 00:28:33.863 [INFO][5185] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2e82ea46fd6 ContainerID="55d47cc0e90ca1ec2816dd53222cb245ad261837e055679febfd933d84d67d2e" Namespace="kube-system" Pod="coredns-668d6bf9bc-7lknh" WorkloadEndpoint="ci--4459.1.0--a--27183f81a1-k8s-coredns--668d6bf9bc--7lknh-eth0" Oct 13 00:28:33.883567 containerd[1881]: 2025-10-13 00:28:33.867 [INFO][5185] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="55d47cc0e90ca1ec2816dd53222cb245ad261837e055679febfd933d84d67d2e" Namespace="kube-system" Pod="coredns-668d6bf9bc-7lknh" WorkloadEndpoint="ci--4459.1.0--a--27183f81a1-k8s-coredns--668d6bf9bc--7lknh-eth0" Oct 13 00:28:33.883567 containerd[1881]: 2025-10-13 00:28:33.868 [INFO][5185] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="55d47cc0e90ca1ec2816dd53222cb245ad261837e055679febfd933d84d67d2e" Namespace="kube-system" Pod="coredns-668d6bf9bc-7lknh" 
WorkloadEndpoint="ci--4459.1.0--a--27183f81a1-k8s-coredns--668d6bf9bc--7lknh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--a--27183f81a1-k8s-coredns--668d6bf9bc--7lknh-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"6c8898f3-e8e8-4ac1-bc62-12aa1248ba56", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 27, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-a-27183f81a1", ContainerID:"55d47cc0e90ca1ec2816dd53222cb245ad261837e055679febfd933d84d67d2e", Pod:"coredns-668d6bf9bc-7lknh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.71.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2e82ea46fd6", MAC:"ee:b6:ce:39:73:4c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:28:33.883567 
containerd[1881]: 2025-10-13 00:28:33.880 [INFO][5185] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="55d47cc0e90ca1ec2816dd53222cb245ad261837e055679febfd933d84d67d2e" Namespace="kube-system" Pod="coredns-668d6bf9bc-7lknh" WorkloadEndpoint="ci--4459.1.0--a--27183f81a1-k8s-coredns--668d6bf9bc--7lknh-eth0" Oct 13 00:28:34.021422 systemd-networkd[1476]: cali223f2308102: Link UP Oct 13 00:28:34.023040 systemd-networkd[1476]: cali223f2308102: Gained carrier Oct 13 00:28:34.042734 containerd[1881]: 2025-10-13 00:28:33.961 [INFO][5213] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.1.0--a--27183f81a1-k8s-calico--apiserver--6d4d7db98f--gx5fl-eth0 calico-apiserver-6d4d7db98f- calico-apiserver dd6494fc-6077-4e51-941e-0d0f0b6a8344 851 0 2025-10-13 00:27:48 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6d4d7db98f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.1.0-a-27183f81a1 calico-apiserver-6d4d7db98f-gx5fl eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali223f2308102 [] [] }} ContainerID="b78be9ae328e95202bd3d7704e7b8bb4aa6ffd6b26a5caaeaddaa7df0d7704b1" Namespace="calico-apiserver" Pod="calico-apiserver-6d4d7db98f-gx5fl" WorkloadEndpoint="ci--4459.1.0--a--27183f81a1-k8s-calico--apiserver--6d4d7db98f--gx5fl-" Oct 13 00:28:34.042734 containerd[1881]: 2025-10-13 00:28:33.962 [INFO][5213] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b78be9ae328e95202bd3d7704e7b8bb4aa6ffd6b26a5caaeaddaa7df0d7704b1" Namespace="calico-apiserver" Pod="calico-apiserver-6d4d7db98f-gx5fl" WorkloadEndpoint="ci--4459.1.0--a--27183f81a1-k8s-calico--apiserver--6d4d7db98f--gx5fl-eth0" Oct 13 00:28:34.042734 containerd[1881]: 2025-10-13 00:28:33.982 [INFO][5225] 
ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b78be9ae328e95202bd3d7704e7b8bb4aa6ffd6b26a5caaeaddaa7df0d7704b1" HandleID="k8s-pod-network.b78be9ae328e95202bd3d7704e7b8bb4aa6ffd6b26a5caaeaddaa7df0d7704b1" Workload="ci--4459.1.0--a--27183f81a1-k8s-calico--apiserver--6d4d7db98f--gx5fl-eth0" Oct 13 00:28:34.042734 containerd[1881]: 2025-10-13 00:28:33.982 [INFO][5225] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b78be9ae328e95202bd3d7704e7b8bb4aa6ffd6b26a5caaeaddaa7df0d7704b1" HandleID="k8s-pod-network.b78be9ae328e95202bd3d7704e7b8bb4aa6ffd6b26a5caaeaddaa7df0d7704b1" Workload="ci--4459.1.0--a--27183f81a1-k8s-calico--apiserver--6d4d7db98f--gx5fl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b1a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459.1.0-a-27183f81a1", "pod":"calico-apiserver-6d4d7db98f-gx5fl", "timestamp":"2025-10-13 00:28:33.982273639 +0000 UTC"}, Hostname:"ci-4459.1.0-a-27183f81a1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 00:28:34.042734 containerd[1881]: 2025-10-13 00:28:33.982 [INFO][5225] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 00:28:34.042734 containerd[1881]: 2025-10-13 00:28:33.982 [INFO][5225] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 00:28:34.042734 containerd[1881]: 2025-10-13 00:28:33.982 [INFO][5225] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.1.0-a-27183f81a1' Oct 13 00:28:34.042734 containerd[1881]: 2025-10-13 00:28:33.987 [INFO][5225] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b78be9ae328e95202bd3d7704e7b8bb4aa6ffd6b26a5caaeaddaa7df0d7704b1" host="ci-4459.1.0-a-27183f81a1" Oct 13 00:28:34.042734 containerd[1881]: 2025-10-13 00:28:33.990 [INFO][5225] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.1.0-a-27183f81a1" Oct 13 00:28:34.042734 containerd[1881]: 2025-10-13 00:28:33.993 [INFO][5225] ipam/ipam.go 511: Trying affinity for 192.168.71.128/26 host="ci-4459.1.0-a-27183f81a1" Oct 13 00:28:34.042734 containerd[1881]: 2025-10-13 00:28:33.995 [INFO][5225] ipam/ipam.go 158: Attempting to load block cidr=192.168.71.128/26 host="ci-4459.1.0-a-27183f81a1" Oct 13 00:28:34.042734 containerd[1881]: 2025-10-13 00:28:33.996 [INFO][5225] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.71.128/26 host="ci-4459.1.0-a-27183f81a1" Oct 13 00:28:34.042734 containerd[1881]: 2025-10-13 00:28:33.996 [INFO][5225] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.71.128/26 handle="k8s-pod-network.b78be9ae328e95202bd3d7704e7b8bb4aa6ffd6b26a5caaeaddaa7df0d7704b1" host="ci-4459.1.0-a-27183f81a1" Oct 13 00:28:34.042734 containerd[1881]: 2025-10-13 00:28:33.997 [INFO][5225] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b78be9ae328e95202bd3d7704e7b8bb4aa6ffd6b26a5caaeaddaa7df0d7704b1 Oct 13 00:28:34.042734 containerd[1881]: 2025-10-13 00:28:34.005 [INFO][5225] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.71.128/26 handle="k8s-pod-network.b78be9ae328e95202bd3d7704e7b8bb4aa6ffd6b26a5caaeaddaa7df0d7704b1" host="ci-4459.1.0-a-27183f81a1" Oct 13 00:28:34.042734 containerd[1881]: 2025-10-13 00:28:34.016 [INFO][5225] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.71.136/26] block=192.168.71.128/26 handle="k8s-pod-network.b78be9ae328e95202bd3d7704e7b8bb4aa6ffd6b26a5caaeaddaa7df0d7704b1" host="ci-4459.1.0-a-27183f81a1" Oct 13 00:28:34.042734 containerd[1881]: 2025-10-13 00:28:34.016 [INFO][5225] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.71.136/26] handle="k8s-pod-network.b78be9ae328e95202bd3d7704e7b8bb4aa6ffd6b26a5caaeaddaa7df0d7704b1" host="ci-4459.1.0-a-27183f81a1" Oct 13 00:28:34.042734 containerd[1881]: 2025-10-13 00:28:34.016 [INFO][5225] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 00:28:34.042734 containerd[1881]: 2025-10-13 00:28:34.016 [INFO][5225] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.71.136/26] IPv6=[] ContainerID="b78be9ae328e95202bd3d7704e7b8bb4aa6ffd6b26a5caaeaddaa7df0d7704b1" HandleID="k8s-pod-network.b78be9ae328e95202bd3d7704e7b8bb4aa6ffd6b26a5caaeaddaa7df0d7704b1" Workload="ci--4459.1.0--a--27183f81a1-k8s-calico--apiserver--6d4d7db98f--gx5fl-eth0" Oct 13 00:28:34.043274 containerd[1881]: 2025-10-13 00:28:34.018 [INFO][5213] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b78be9ae328e95202bd3d7704e7b8bb4aa6ffd6b26a5caaeaddaa7df0d7704b1" Namespace="calico-apiserver" Pod="calico-apiserver-6d4d7db98f-gx5fl" WorkloadEndpoint="ci--4459.1.0--a--27183f81a1-k8s-calico--apiserver--6d4d7db98f--gx5fl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--a--27183f81a1-k8s-calico--apiserver--6d4d7db98f--gx5fl-eth0", GenerateName:"calico-apiserver-6d4d7db98f-", Namespace:"calico-apiserver", SelfLink:"", UID:"dd6494fc-6077-4e51-941e-0d0f0b6a8344", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 27, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d4d7db98f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-a-27183f81a1", ContainerID:"", Pod:"calico-apiserver-6d4d7db98f-gx5fl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.71.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali223f2308102", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:28:34.043274 containerd[1881]: 2025-10-13 00:28:34.018 [INFO][5213] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.71.136/32] ContainerID="b78be9ae328e95202bd3d7704e7b8bb4aa6ffd6b26a5caaeaddaa7df0d7704b1" Namespace="calico-apiserver" Pod="calico-apiserver-6d4d7db98f-gx5fl" WorkloadEndpoint="ci--4459.1.0--a--27183f81a1-k8s-calico--apiserver--6d4d7db98f--gx5fl-eth0" Oct 13 00:28:34.043274 containerd[1881]: 2025-10-13 00:28:34.018 [INFO][5213] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali223f2308102 ContainerID="b78be9ae328e95202bd3d7704e7b8bb4aa6ffd6b26a5caaeaddaa7df0d7704b1" Namespace="calico-apiserver" Pod="calico-apiserver-6d4d7db98f-gx5fl" WorkloadEndpoint="ci--4459.1.0--a--27183f81a1-k8s-calico--apiserver--6d4d7db98f--gx5fl-eth0" Oct 13 00:28:34.043274 containerd[1881]: 2025-10-13 00:28:34.023 [INFO][5213] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b78be9ae328e95202bd3d7704e7b8bb4aa6ffd6b26a5caaeaddaa7df0d7704b1" Namespace="calico-apiserver" 
Pod="calico-apiserver-6d4d7db98f-gx5fl" WorkloadEndpoint="ci--4459.1.0--a--27183f81a1-k8s-calico--apiserver--6d4d7db98f--gx5fl-eth0" Oct 13 00:28:34.043274 containerd[1881]: 2025-10-13 00:28:34.025 [INFO][5213] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b78be9ae328e95202bd3d7704e7b8bb4aa6ffd6b26a5caaeaddaa7df0d7704b1" Namespace="calico-apiserver" Pod="calico-apiserver-6d4d7db98f-gx5fl" WorkloadEndpoint="ci--4459.1.0--a--27183f81a1-k8s-calico--apiserver--6d4d7db98f--gx5fl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--a--27183f81a1-k8s-calico--apiserver--6d4d7db98f--gx5fl-eth0", GenerateName:"calico-apiserver-6d4d7db98f-", Namespace:"calico-apiserver", SelfLink:"", UID:"dd6494fc-6077-4e51-941e-0d0f0b6a8344", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 27, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d4d7db98f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-a-27183f81a1", ContainerID:"b78be9ae328e95202bd3d7704e7b8bb4aa6ffd6b26a5caaeaddaa7df0d7704b1", Pod:"calico-apiserver-6d4d7db98f-gx5fl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.71.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, 
InterfaceName:"cali223f2308102", MAC:"be:6d:05:20:9e:68", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:28:34.043274 containerd[1881]: 2025-10-13 00:28:34.041 [INFO][5213] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b78be9ae328e95202bd3d7704e7b8bb4aa6ffd6b26a5caaeaddaa7df0d7704b1" Namespace="calico-apiserver" Pod="calico-apiserver-6d4d7db98f-gx5fl" WorkloadEndpoint="ci--4459.1.0--a--27183f81a1-k8s-calico--apiserver--6d4d7db98f--gx5fl-eth0" Oct 13 00:28:34.167658 systemd-networkd[1476]: calib5bffcc03ab: Link UP Oct 13 00:28:34.168307 systemd-networkd[1476]: calib5bffcc03ab: Gained carrier Oct 13 00:28:34.189818 containerd[1881]: 2025-10-13 00:28:34.107 [INFO][5241] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.1.0--a--27183f81a1-k8s-calico--apiserver--6d4d7db98f--ctlg8-eth0 calico-apiserver-6d4d7db98f- calico-apiserver d5df1242-3535-480f-83ed-f48fbf6f9e8f 848 0 2025-10-13 00:27:48 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6d4d7db98f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.1.0-a-27183f81a1 calico-apiserver-6d4d7db98f-ctlg8 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib5bffcc03ab [] [] }} ContainerID="5fdc1bee8a7fb128bebdd655e60449fb3c4ee203e1c8bda5fd20325ec41b53c8" Namespace="calico-apiserver" Pod="calico-apiserver-6d4d7db98f-ctlg8" WorkloadEndpoint="ci--4459.1.0--a--27183f81a1-k8s-calico--apiserver--6d4d7db98f--ctlg8-" Oct 13 00:28:34.189818 containerd[1881]: 2025-10-13 00:28:34.107 [INFO][5241] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5fdc1bee8a7fb128bebdd655e60449fb3c4ee203e1c8bda5fd20325ec41b53c8" Namespace="calico-apiserver" 
Pod="calico-apiserver-6d4d7db98f-ctlg8" WorkloadEndpoint="ci--4459.1.0--a--27183f81a1-k8s-calico--apiserver--6d4d7db98f--ctlg8-eth0" Oct 13 00:28:34.189818 containerd[1881]: 2025-10-13 00:28:34.124 [INFO][5253] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5fdc1bee8a7fb128bebdd655e60449fb3c4ee203e1c8bda5fd20325ec41b53c8" HandleID="k8s-pod-network.5fdc1bee8a7fb128bebdd655e60449fb3c4ee203e1c8bda5fd20325ec41b53c8" Workload="ci--4459.1.0--a--27183f81a1-k8s-calico--apiserver--6d4d7db98f--ctlg8-eth0" Oct 13 00:28:34.189818 containerd[1881]: 2025-10-13 00:28:34.124 [INFO][5253] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5fdc1bee8a7fb128bebdd655e60449fb3c4ee203e1c8bda5fd20325ec41b53c8" HandleID="k8s-pod-network.5fdc1bee8a7fb128bebdd655e60449fb3c4ee203e1c8bda5fd20325ec41b53c8" Workload="ci--4459.1.0--a--27183f81a1-k8s-calico--apiserver--6d4d7db98f--ctlg8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b270), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459.1.0-a-27183f81a1", "pod":"calico-apiserver-6d4d7db98f-ctlg8", "timestamp":"2025-10-13 00:28:34.124781789 +0000 UTC"}, Hostname:"ci-4459.1.0-a-27183f81a1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 00:28:34.189818 containerd[1881]: 2025-10-13 00:28:34.125 [INFO][5253] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 00:28:34.189818 containerd[1881]: 2025-10-13 00:28:34.125 [INFO][5253] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 00:28:34.189818 containerd[1881]: 2025-10-13 00:28:34.125 [INFO][5253] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.1.0-a-27183f81a1' Oct 13 00:28:34.189818 containerd[1881]: 2025-10-13 00:28:34.129 [INFO][5253] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5fdc1bee8a7fb128bebdd655e60449fb3c4ee203e1c8bda5fd20325ec41b53c8" host="ci-4459.1.0-a-27183f81a1" Oct 13 00:28:34.189818 containerd[1881]: 2025-10-13 00:28:34.133 [INFO][5253] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.1.0-a-27183f81a1" Oct 13 00:28:34.189818 containerd[1881]: 2025-10-13 00:28:34.139 [INFO][5253] ipam/ipam.go 511: Trying affinity for 192.168.71.128/26 host="ci-4459.1.0-a-27183f81a1" Oct 13 00:28:34.189818 containerd[1881]: 2025-10-13 00:28:34.140 [INFO][5253] ipam/ipam.go 158: Attempting to load block cidr=192.168.71.128/26 host="ci-4459.1.0-a-27183f81a1" Oct 13 00:28:34.189818 containerd[1881]: 2025-10-13 00:28:34.142 [INFO][5253] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.71.128/26 host="ci-4459.1.0-a-27183f81a1" Oct 13 00:28:34.189818 containerd[1881]: 2025-10-13 00:28:34.142 [INFO][5253] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.71.128/26 handle="k8s-pod-network.5fdc1bee8a7fb128bebdd655e60449fb3c4ee203e1c8bda5fd20325ec41b53c8" host="ci-4459.1.0-a-27183f81a1" Oct 13 00:28:34.189818 containerd[1881]: 2025-10-13 00:28:34.143 [INFO][5253] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5fdc1bee8a7fb128bebdd655e60449fb3c4ee203e1c8bda5fd20325ec41b53c8 Oct 13 00:28:34.189818 containerd[1881]: 2025-10-13 00:28:34.148 [INFO][5253] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.71.128/26 handle="k8s-pod-network.5fdc1bee8a7fb128bebdd655e60449fb3c4ee203e1c8bda5fd20325ec41b53c8" host="ci-4459.1.0-a-27183f81a1" Oct 13 00:28:34.189818 containerd[1881]: 2025-10-13 00:28:34.159 [INFO][5253] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.71.137/26] block=192.168.71.128/26 handle="k8s-pod-network.5fdc1bee8a7fb128bebdd655e60449fb3c4ee203e1c8bda5fd20325ec41b53c8" host="ci-4459.1.0-a-27183f81a1" Oct 13 00:28:34.189818 containerd[1881]: 2025-10-13 00:28:34.159 [INFO][5253] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.71.137/26] handle="k8s-pod-network.5fdc1bee8a7fb128bebdd655e60449fb3c4ee203e1c8bda5fd20325ec41b53c8" host="ci-4459.1.0-a-27183f81a1" Oct 13 00:28:34.189818 containerd[1881]: 2025-10-13 00:28:34.160 [INFO][5253] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 00:28:34.189818 containerd[1881]: 2025-10-13 00:28:34.160 [INFO][5253] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.71.137/26] IPv6=[] ContainerID="5fdc1bee8a7fb128bebdd655e60449fb3c4ee203e1c8bda5fd20325ec41b53c8" HandleID="k8s-pod-network.5fdc1bee8a7fb128bebdd655e60449fb3c4ee203e1c8bda5fd20325ec41b53c8" Workload="ci--4459.1.0--a--27183f81a1-k8s-calico--apiserver--6d4d7db98f--ctlg8-eth0" Oct 13 00:28:34.190481 containerd[1881]: 2025-10-13 00:28:34.164 [INFO][5241] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5fdc1bee8a7fb128bebdd655e60449fb3c4ee203e1c8bda5fd20325ec41b53c8" Namespace="calico-apiserver" Pod="calico-apiserver-6d4d7db98f-ctlg8" WorkloadEndpoint="ci--4459.1.0--a--27183f81a1-k8s-calico--apiserver--6d4d7db98f--ctlg8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--a--27183f81a1-k8s-calico--apiserver--6d4d7db98f--ctlg8-eth0", GenerateName:"calico-apiserver-6d4d7db98f-", Namespace:"calico-apiserver", SelfLink:"", UID:"d5df1242-3535-480f-83ed-f48fbf6f9e8f", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 27, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d4d7db98f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-a-27183f81a1", ContainerID:"", Pod:"calico-apiserver-6d4d7db98f-ctlg8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.71.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib5bffcc03ab", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:28:34.190481 containerd[1881]: 2025-10-13 00:28:34.164 [INFO][5241] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.71.137/32] ContainerID="5fdc1bee8a7fb128bebdd655e60449fb3c4ee203e1c8bda5fd20325ec41b53c8" Namespace="calico-apiserver" Pod="calico-apiserver-6d4d7db98f-ctlg8" WorkloadEndpoint="ci--4459.1.0--a--27183f81a1-k8s-calico--apiserver--6d4d7db98f--ctlg8-eth0" Oct 13 00:28:34.190481 containerd[1881]: 2025-10-13 00:28:34.164 [INFO][5241] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib5bffcc03ab ContainerID="5fdc1bee8a7fb128bebdd655e60449fb3c4ee203e1c8bda5fd20325ec41b53c8" Namespace="calico-apiserver" Pod="calico-apiserver-6d4d7db98f-ctlg8" WorkloadEndpoint="ci--4459.1.0--a--27183f81a1-k8s-calico--apiserver--6d4d7db98f--ctlg8-eth0" Oct 13 00:28:34.190481 containerd[1881]: 2025-10-13 00:28:34.168 [INFO][5241] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5fdc1bee8a7fb128bebdd655e60449fb3c4ee203e1c8bda5fd20325ec41b53c8" Namespace="calico-apiserver" 
Pod="calico-apiserver-6d4d7db98f-ctlg8" WorkloadEndpoint="ci--4459.1.0--a--27183f81a1-k8s-calico--apiserver--6d4d7db98f--ctlg8-eth0" Oct 13 00:28:34.190481 containerd[1881]: 2025-10-13 00:28:34.169 [INFO][5241] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5fdc1bee8a7fb128bebdd655e60449fb3c4ee203e1c8bda5fd20325ec41b53c8" Namespace="calico-apiserver" Pod="calico-apiserver-6d4d7db98f-ctlg8" WorkloadEndpoint="ci--4459.1.0--a--27183f81a1-k8s-calico--apiserver--6d4d7db98f--ctlg8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--a--27183f81a1-k8s-calico--apiserver--6d4d7db98f--ctlg8-eth0", GenerateName:"calico-apiserver-6d4d7db98f-", Namespace:"calico-apiserver", SelfLink:"", UID:"d5df1242-3535-480f-83ed-f48fbf6f9e8f", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 27, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d4d7db98f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-a-27183f81a1", ContainerID:"5fdc1bee8a7fb128bebdd655e60449fb3c4ee203e1c8bda5fd20325ec41b53c8", Pod:"calico-apiserver-6d4d7db98f-ctlg8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.71.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, 
InterfaceName:"calib5bffcc03ab", MAC:"f2:80:9e:24:c1:d1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:28:34.190481 containerd[1881]: 2025-10-13 00:28:34.185 [INFO][5241] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5fdc1bee8a7fb128bebdd655e60449fb3c4ee203e1c8bda5fd20325ec41b53c8" Namespace="calico-apiserver" Pod="calico-apiserver-6d4d7db98f-ctlg8" WorkloadEndpoint="ci--4459.1.0--a--27183f81a1-k8s-calico--apiserver--6d4d7db98f--ctlg8-eth0" Oct 13 00:28:34.472409 kubelet[3373]: I1013 00:28:34.471611 3373 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-mx5qd" podStartSLOduration=62.471593987 podStartE2EDuration="1m2.471593987s" podCreationTimestamp="2025-10-13 00:27:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 00:28:34.471364363 +0000 UTC m=+66.532091721" watchObservedRunningTime="2025-10-13 00:28:34.471593987 +0000 UTC m=+66.532321321" Oct 13 00:28:34.594459 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2647286152.mount: Deactivated successfully. 
Oct 13 00:28:35.038270 systemd-networkd[1476]: cali223f2308102: Gained IPv6LL Oct 13 00:28:35.038531 systemd-networkd[1476]: cali2e82ea46fd6: Gained IPv6LL Oct 13 00:28:35.294072 systemd-networkd[1476]: calib5bffcc03ab: Gained IPv6LL Oct 13 00:28:35.550093 systemd-networkd[1476]: calif9611cc5089: Gained IPv6LL Oct 13 00:28:40.441286 containerd[1881]: time="2025-10-13T00:28:40.441185338Z" level=info msg="connecting to shim 29929191cff5b852d575f16e7248558502e1f1fabe0a56e6c076fc8cb321786c" address="unix:///run/containerd/s/0a8e0cce2d3c2e9d0d3cec9e75788c5af81cde85dd9e2196abfaebbccf8ce4bb" namespace=k8s.io protocol=ttrpc version=3 Oct 13 00:28:40.459077 systemd[1]: Started cri-containerd-29929191cff5b852d575f16e7248558502e1f1fabe0a56e6c076fc8cb321786c.scope - libcontainer container 29929191cff5b852d575f16e7248558502e1f1fabe0a56e6c076fc8cb321786c. Oct 13 00:28:40.549307 containerd[1881]: time="2025-10-13T00:28:40.549273235Z" level=info msg="connecting to shim 55d47cc0e90ca1ec2816dd53222cb245ad261837e055679febfd933d84d67d2e" address="unix:///run/containerd/s/d0ccfc26c774ec1f7daabd5339ba67f4571a675347e12cba80bbeb8cd909cce6" namespace=k8s.io protocol=ttrpc version=3 Oct 13 00:28:40.576111 systemd[1]: Started cri-containerd-55d47cc0e90ca1ec2816dd53222cb245ad261837e055679febfd933d84d67d2e.scope - libcontainer container 55d47cc0e90ca1ec2816dd53222cb245ad261837e055679febfd933d84d67d2e. Oct 13 00:28:40.650074 containerd[1881]: time="2025-10-13T00:28:40.649924605Z" level=info msg="connecting to shim 5fdc1bee8a7fb128bebdd655e60449fb3c4ee203e1c8bda5fd20325ec41b53c8" address="unix:///run/containerd/s/81f0d3ea7a3296df4497c2d4e78ad336a4591470f1fe1de2526a1758a00d1924" namespace=k8s.io protocol=ttrpc version=3 Oct 13 00:28:40.672081 systemd[1]: Started cri-containerd-5fdc1bee8a7fb128bebdd655e60449fb3c4ee203e1c8bda5fd20325ec41b53c8.scope - libcontainer container 5fdc1bee8a7fb128bebdd655e60449fb3c4ee203e1c8bda5fd20325ec41b53c8. 
Oct 13 00:28:40.694441 containerd[1881]: time="2025-10-13T00:28:40.694349513Z" level=info msg="connecting to shim b78be9ae328e95202bd3d7704e7b8bb4aa6ffd6b26a5caaeaddaa7df0d7704b1" address="unix:///run/containerd/s/e138cb61a5ef80a08939311b4c2b8993a9ef89674f7e308cbf2d25cf2c4e6999" namespace=k8s.io protocol=ttrpc version=3 Oct 13 00:28:40.718090 systemd[1]: Started cri-containerd-b78be9ae328e95202bd3d7704e7b8bb4aa6ffd6b26a5caaeaddaa7df0d7704b1.scope - libcontainer container b78be9ae328e95202bd3d7704e7b8bb4aa6ffd6b26a5caaeaddaa7df0d7704b1. Oct 13 00:28:40.733934 containerd[1881]: time="2025-10-13T00:28:40.733899308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-745499777d-j7f7t,Uid:6541987e-4c6e-48c0-877c-85d5d35cabc1,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"29929191cff5b852d575f16e7248558502e1f1fabe0a56e6c076fc8cb321786c\"" Oct 13 00:28:40.830434 containerd[1881]: time="2025-10-13T00:28:40.830391267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7lknh,Uid:6c8898f3-e8e8-4ac1-bc62-12aa1248ba56,Namespace:kube-system,Attempt:0,} returns sandbox id \"55d47cc0e90ca1ec2816dd53222cb245ad261837e055679febfd933d84d67d2e\"" Oct 13 00:28:40.835412 containerd[1881]: time="2025-10-13T00:28:40.835376089Z" level=info msg="CreateContainer within sandbox \"55d47cc0e90ca1ec2816dd53222cb245ad261837e055679febfd933d84d67d2e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 13 00:28:40.895341 containerd[1881]: time="2025-10-13T00:28:40.895305641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d4d7db98f-ctlg8,Uid:d5df1242-3535-480f-83ed-f48fbf6f9e8f,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"5fdc1bee8a7fb128bebdd655e60449fb3c4ee203e1c8bda5fd20325ec41b53c8\"" Oct 13 00:28:43.035016 containerd[1881]: time="2025-10-13T00:28:43.034972651Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-6d4d7db98f-gx5fl,Uid:dd6494fc-6077-4e51-941e-0d0f0b6a8344,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"b78be9ae328e95202bd3d7704e7b8bb4aa6ffd6b26a5caaeaddaa7df0d7704b1\"" Oct 13 00:28:48.078240 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2511079520.mount: Deactivated successfully. Oct 13 00:28:48.082026 containerd[1881]: time="2025-10-13T00:28:48.079844883Z" level=info msg="Container f2ba4684da51cda7755c8ab1b6089bad960b60d37b951e8779cfd04976d57138: CDI devices from CRI Config.CDIDevices: []" Oct 13 00:28:48.111514 containerd[1881]: time="2025-10-13T00:28:48.111472977Z" level=info msg="CreateContainer within sandbox \"55d47cc0e90ca1ec2816dd53222cb245ad261837e055679febfd933d84d67d2e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f2ba4684da51cda7755c8ab1b6089bad960b60d37b951e8779cfd04976d57138\"" Oct 13 00:28:48.113891 containerd[1881]: time="2025-10-13T00:28:48.112113782Z" level=info msg="StartContainer for \"f2ba4684da51cda7755c8ab1b6089bad960b60d37b951e8779cfd04976d57138\"" Oct 13 00:28:48.113891 containerd[1881]: time="2025-10-13T00:28:48.112721386Z" level=info msg="connecting to shim f2ba4684da51cda7755c8ab1b6089bad960b60d37b951e8779cfd04976d57138" address="unix:///run/containerd/s/d0ccfc26c774ec1f7daabd5339ba67f4571a675347e12cba80bbeb8cd909cce6" protocol=ttrpc version=3 Oct 13 00:28:48.120903 containerd[1881]: time="2025-10-13T00:28:48.120866557Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:28:48.125689 containerd[1881]: time="2025-10-13T00:28:48.125652954Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=61845332" Oct 13 00:28:48.143083 systemd[1]: Started cri-containerd-f2ba4684da51cda7755c8ab1b6089bad960b60d37b951e8779cfd04976d57138.scope - libcontainer container 
f2ba4684da51cda7755c8ab1b6089bad960b60d37b951e8779cfd04976d57138. Oct 13 00:28:48.236887 containerd[1881]: time="2025-10-13T00:28:48.236760904Z" level=info msg="ImageCreate event name:\"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:28:48.288091 containerd[1881]: time="2025-10-13T00:28:48.288043339Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:28:48.288997 containerd[1881]: time="2025-10-13T00:28:48.288877015Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"61845178\" in 21.010266984s" Oct 13 00:28:48.288997 containerd[1881]: time="2025-10-13T00:28:48.288904080Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\"" Oct 13 00:28:48.290741 containerd[1881]: time="2025-10-13T00:28:48.289851935Z" level=info msg="StartContainer for \"f2ba4684da51cda7755c8ab1b6089bad960b60d37b951e8779cfd04976d57138\" returns successfully" Oct 13 00:28:48.292622 containerd[1881]: time="2025-10-13T00:28:48.292598177Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Oct 13 00:28:48.293962 containerd[1881]: time="2025-10-13T00:28:48.293913044Z" level=info msg="CreateContainer within sandbox \"daf18e1aedfcc43ec85203315db319e174c78feb36ef65ccb7ced96d962f5772\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Oct 13 00:28:48.312337 containerd[1881]: time="2025-10-13T00:28:48.312086664Z" 
level=info msg="Container 94dea1e49efd0cf7b8dae40750c3818f20cc3b3ee85c4394161192dcf3877c68: CDI devices from CRI Config.CDIDevices: []" Oct 13 00:28:48.330514 containerd[1881]: time="2025-10-13T00:28:48.330390673Z" level=info msg="CreateContainer within sandbox \"daf18e1aedfcc43ec85203315db319e174c78feb36ef65ccb7ced96d962f5772\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"94dea1e49efd0cf7b8dae40750c3818f20cc3b3ee85c4394161192dcf3877c68\"" Oct 13 00:28:48.331330 containerd[1881]: time="2025-10-13T00:28:48.331105441Z" level=info msg="StartContainer for \"94dea1e49efd0cf7b8dae40750c3818f20cc3b3ee85c4394161192dcf3877c68\"" Oct 13 00:28:48.332271 containerd[1881]: time="2025-10-13T00:28:48.332231998Z" level=info msg="connecting to shim 94dea1e49efd0cf7b8dae40750c3818f20cc3b3ee85c4394161192dcf3877c68" address="unix:///run/containerd/s/9f78f4c315eacf7fe15e183297c2e06d0edf0d16d82dd15f2390a4798026f7bd" protocol=ttrpc version=3 Oct 13 00:28:48.348078 systemd[1]: Started cri-containerd-94dea1e49efd0cf7b8dae40750c3818f20cc3b3ee85c4394161192dcf3877c68.scope - libcontainer container 94dea1e49efd0cf7b8dae40750c3818f20cc3b3ee85c4394161192dcf3877c68. 
Oct 13 00:28:48.391651 containerd[1881]: time="2025-10-13T00:28:48.391571393Z" level=info msg="StartContainer for \"94dea1e49efd0cf7b8dae40750c3818f20cc3b3ee85c4394161192dcf3877c68\" returns successfully" Oct 13 00:28:48.520735 kubelet[3373]: I1013 00:28:48.520533 3373 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-7lknh" podStartSLOduration=76.5205138 podStartE2EDuration="1m16.5205138s" podCreationTimestamp="2025-10-13 00:27:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 00:28:48.503656727 +0000 UTC m=+80.564384125" watchObservedRunningTime="2025-10-13 00:28:48.5205138 +0000 UTC m=+80.581241126" Oct 13 00:28:48.522044 kubelet[3373]: I1013 00:28:48.522003 3373 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-j7cfn" podStartSLOduration=34.544497866 podStartE2EDuration="56.521990665s" podCreationTimestamp="2025-10-13 00:27:52 +0000 UTC" firstStartedPulling="2025-10-13 00:28:26.314911548 +0000 UTC m=+58.375638882" lastFinishedPulling="2025-10-13 00:28:48.292404355 +0000 UTC m=+80.353131681" observedRunningTime="2025-10-13 00:28:48.520226399 +0000 UTC m=+80.580953725" watchObservedRunningTime="2025-10-13 00:28:48.521990665 +0000 UTC m=+80.582717991" Oct 13 00:28:48.583327 containerd[1881]: time="2025-10-13T00:28:48.582791180Z" level=info msg="TaskExit event in podsandbox handler container_id:\"94dea1e49efd0cf7b8dae40750c3818f20cc3b3ee85c4394161192dcf3877c68\" id:\"7ac19d2b9e30375ccb34f342190f0417133e03e07cef04955c4197a72bc24397\" pid:5566 exit_status:1 exited_at:{seconds:1760315328 nanos:582309157}" Oct 13 00:28:49.538952 containerd[1881]: time="2025-10-13T00:28:49.538873324Z" level=info msg="TaskExit event in podsandbox handler container_id:\"94dea1e49efd0cf7b8dae40750c3818f20cc3b3ee85c4394161192dcf3877c68\" 
id:\"fe71f9279ea1346066307a15c151f3e726b028bed118b093b11d782ff795fff2\" pid:5597 exit_status:1 exited_at:{seconds:1760315329 nanos:538337347}" Oct 13 00:28:49.979053 containerd[1881]: time="2025-10-13T00:28:49.978866716Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:28:49.982017 containerd[1881]: time="2025-10-13T00:28:49.981983090Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8227489" Oct 13 00:28:49.985054 containerd[1881]: time="2025-10-13T00:28:49.985010405Z" level=info msg="ImageCreate event name:\"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:28:49.989075 containerd[1881]: time="2025-10-13T00:28:49.989027361Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:28:49.989581 containerd[1881]: time="2025-10-13T00:28:49.989303922Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"9596730\" in 1.696545948s" Oct 13 00:28:49.989581 containerd[1881]: time="2025-10-13T00:28:49.989330619Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\"" Oct 13 00:28:49.991352 containerd[1881]: time="2025-10-13T00:28:49.990979889Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Oct 13 00:28:49.991962 containerd[1881]: 
time="2025-10-13T00:28:49.991845261Z" level=info msg="CreateContainer within sandbox \"0d980e1c99d405f1c164192a5a588ae90cbe01b4004ef6601252fc948b0540da\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Oct 13 00:28:50.012802 containerd[1881]: time="2025-10-13T00:28:50.012772468Z" level=info msg="Container 3e4df96027a2e21cc441758c57dc3747bedf884aa8b6bce11dc1e4eb2c5c0535: CDI devices from CRI Config.CDIDevices: []" Oct 13 00:28:50.018320 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount681938776.mount: Deactivated successfully. Oct 13 00:28:50.032230 containerd[1881]: time="2025-10-13T00:28:50.032193874Z" level=info msg="CreateContainer within sandbox \"0d980e1c99d405f1c164192a5a588ae90cbe01b4004ef6601252fc948b0540da\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"3e4df96027a2e21cc441758c57dc3747bedf884aa8b6bce11dc1e4eb2c5c0535\"" Oct 13 00:28:50.033164 containerd[1881]: time="2025-10-13T00:28:50.032931442Z" level=info msg="StartContainer for \"3e4df96027a2e21cc441758c57dc3747bedf884aa8b6bce11dc1e4eb2c5c0535\"" Oct 13 00:28:50.034358 containerd[1881]: time="2025-10-13T00:28:50.034334320Z" level=info msg="connecting to shim 3e4df96027a2e21cc441758c57dc3747bedf884aa8b6bce11dc1e4eb2c5c0535" address="unix:///run/containerd/s/ee2a7f9f4fd7470910858da1558a7822cc8224558736930afced32db1b4e29f9" protocol=ttrpc version=3 Oct 13 00:28:50.054086 systemd[1]: Started cri-containerd-3e4df96027a2e21cc441758c57dc3747bedf884aa8b6bce11dc1e4eb2c5c0535.scope - libcontainer container 3e4df96027a2e21cc441758c57dc3747bedf884aa8b6bce11dc1e4eb2c5c0535. 
Oct 13 00:28:50.084842 containerd[1881]: time="2025-10-13T00:28:50.084788128Z" level=info msg="StartContainer for \"3e4df96027a2e21cc441758c57dc3747bedf884aa8b6bce11dc1e4eb2c5c0535\" returns successfully" Oct 13 00:28:50.535771 containerd[1881]: time="2025-10-13T00:28:50.535730558Z" level=info msg="TaskExit event in podsandbox handler container_id:\"94dea1e49efd0cf7b8dae40750c3818f20cc3b3ee85c4394161192dcf3877c68\" id:\"ebd487f33505be161162f36b4d49d1778b8c936758207050bf26a1572134d988\" pid:5651 exit_status:1 exited_at:{seconds:1760315330 nanos:535283919}" Oct 13 00:28:52.870725 containerd[1881]: time="2025-10-13T00:28:52.870658183Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:28:52.873953 containerd[1881]: time="2025-10-13T00:28:52.873902977Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=48134957" Oct 13 00:28:52.877631 containerd[1881]: time="2025-10-13T00:28:52.877590105Z" level=info msg="ImageCreate event name:\"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:28:52.883668 containerd[1881]: time="2025-10-13T00:28:52.883374678Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:28:52.885088 containerd[1881]: time="2025-10-13T00:28:52.884923049Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size 
\"49504166\" in 2.893489016s" Oct 13 00:28:52.885088 containerd[1881]: time="2025-10-13T00:28:52.884970442Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\"" Oct 13 00:28:52.886210 containerd[1881]: time="2025-10-13T00:28:52.886047709Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Oct 13 00:28:52.906446 containerd[1881]: time="2025-10-13T00:28:52.906403622Z" level=info msg="CreateContainer within sandbox \"f6aa78c80cd67efb144f7bf24ea793e451365cb6d28a91c7951a6bc77890f8a5\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Oct 13 00:28:52.928673 containerd[1881]: time="2025-10-13T00:28:52.928098411Z" level=info msg="Container 45c77385ae7eb8c27b4cfa56257be96084e6585d69ac249acbe58030584003f6: CDI devices from CRI Config.CDIDevices: []" Oct 13 00:28:52.944871 containerd[1881]: time="2025-10-13T00:28:52.944832189Z" level=info msg="CreateContainer within sandbox \"f6aa78c80cd67efb144f7bf24ea793e451365cb6d28a91c7951a6bc77890f8a5\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"45c77385ae7eb8c27b4cfa56257be96084e6585d69ac249acbe58030584003f6\"" Oct 13 00:28:52.946247 containerd[1881]: time="2025-10-13T00:28:52.946219994Z" level=info msg="StartContainer for \"45c77385ae7eb8c27b4cfa56257be96084e6585d69ac249acbe58030584003f6\"" Oct 13 00:28:52.948081 containerd[1881]: time="2025-10-13T00:28:52.948053550Z" level=info msg="connecting to shim 45c77385ae7eb8c27b4cfa56257be96084e6585d69ac249acbe58030584003f6" address="unix:///run/containerd/s/f0a1f98cb9d0b7ee262b4fbedecab84fdfdb6bd3bf3659000c417d0325693120" protocol=ttrpc version=3 Oct 13 00:28:52.967071 systemd[1]: Started cri-containerd-45c77385ae7eb8c27b4cfa56257be96084e6585d69ac249acbe58030584003f6.scope - libcontainer container 45c77385ae7eb8c27b4cfa56257be96084e6585d69ac249acbe58030584003f6. 
Oct 13 00:28:53.001173 containerd[1881]: time="2025-10-13T00:28:53.001120539Z" level=info msg="StartContainer for \"45c77385ae7eb8c27b4cfa56257be96084e6585d69ac249acbe58030584003f6\" returns successfully" Oct 13 00:28:53.356181 containerd[1881]: time="2025-10-13T00:28:53.356110179Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5cce6d6ad6e3b689c9464d26ed717e739885e9507e1b6e83ef1827561f9f51df\" id:\"d45a551039e76dfca8fd7591809fe64dcc9b87d344296689391792e60e6e563a\" pid:5719 exited_at:{seconds:1760315333 nanos:354687205}" Oct 13 00:28:53.562365 containerd[1881]: time="2025-10-13T00:28:53.562324225Z" level=info msg="TaskExit event in podsandbox handler container_id:\"45c77385ae7eb8c27b4cfa56257be96084e6585d69ac249acbe58030584003f6\" id:\"626a98b6f1520048cf4430639ffdbe06058995dc0e691d9070e9a2affff8b7c3\" pid:5747 exited_at:{seconds:1760315333 nanos:561076441}" Oct 13 00:28:53.582345 kubelet[3373]: I1013 00:28:53.582290 3373 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-844f59d67c-9hbnj" podStartSLOduration=36.383170769 podStartE2EDuration="1m0.582276549s" podCreationTimestamp="2025-10-13 00:27:53 +0000 UTC" firstStartedPulling="2025-10-13 00:28:28.686723818 +0000 UTC m=+60.747451144" lastFinishedPulling="2025-10-13 00:28:52.885829598 +0000 UTC m=+84.946556924" observedRunningTime="2025-10-13 00:28:53.510964204 +0000 UTC m=+85.571691594" watchObservedRunningTime="2025-10-13 00:28:53.582276549 +0000 UTC m=+85.643003875" Oct 13 00:28:56.715676 containerd[1881]: time="2025-10-13T00:28:56.715159469Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:28:56.719217 containerd[1881]: time="2025-10-13T00:28:56.719188529Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=44530807" Oct 13 00:28:56.723738 containerd[1881]: 
time="2025-10-13T00:28:56.723710780Z" level=info msg="ImageCreate event name:\"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:28:56.726400 containerd[1881]: time="2025-10-13T00:28:56.726363395Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:28:56.727324 containerd[1881]: time="2025-10-13T00:28:56.727215295Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 3.841141521s" Oct 13 00:28:56.727324 containerd[1881]: time="2025-10-13T00:28:56.727241544Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Oct 13 00:28:56.730808 containerd[1881]: time="2025-10-13T00:28:56.730773947Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Oct 13 00:28:56.732351 containerd[1881]: time="2025-10-13T00:28:56.732141832Z" level=info msg="CreateContainer within sandbox \"29929191cff5b852d575f16e7248558502e1f1fabe0a56e6c076fc8cb321786c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 13 00:28:56.754960 containerd[1881]: time="2025-10-13T00:28:56.753224984Z" level=info msg="Container cc6de53e53374086b9a3dae52e6eee1d25c8eefa93fd10df7ba640c3395330d1: CDI devices from CRI Config.CDIDevices: []" Oct 13 00:28:56.774265 containerd[1881]: time="2025-10-13T00:28:56.774228198Z" level=info msg="CreateContainer within sandbox 
\"29929191cff5b852d575f16e7248558502e1f1fabe0a56e6c076fc8cb321786c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"cc6de53e53374086b9a3dae52e6eee1d25c8eefa93fd10df7ba640c3395330d1\"" Oct 13 00:28:56.774928 containerd[1881]: time="2025-10-13T00:28:56.774884163Z" level=info msg="StartContainer for \"cc6de53e53374086b9a3dae52e6eee1d25c8eefa93fd10df7ba640c3395330d1\"" Oct 13 00:28:56.776443 containerd[1881]: time="2025-10-13T00:28:56.776417230Z" level=info msg="connecting to shim cc6de53e53374086b9a3dae52e6eee1d25c8eefa93fd10df7ba640c3395330d1" address="unix:///run/containerd/s/0a8e0cce2d3c2e9d0d3cec9e75788c5af81cde85dd9e2196abfaebbccf8ce4bb" protocol=ttrpc version=3 Oct 13 00:28:56.820225 systemd[1]: Started cri-containerd-cc6de53e53374086b9a3dae52e6eee1d25c8eefa93fd10df7ba640c3395330d1.scope - libcontainer container cc6de53e53374086b9a3dae52e6eee1d25c8eefa93fd10df7ba640c3395330d1. Oct 13 00:28:56.873817 containerd[1881]: time="2025-10-13T00:28:56.873770537Z" level=info msg="StartContainer for \"cc6de53e53374086b9a3dae52e6eee1d25c8eefa93fd10df7ba640c3395330d1\" returns successfully" Oct 13 00:28:57.078112 containerd[1881]: time="2025-10-13T00:28:57.078060360Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:28:57.081267 containerd[1881]: time="2025-10-13T00:28:57.081067114Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Oct 13 00:28:57.082925 containerd[1881]: time="2025-10-13T00:28:57.082738160Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 351.94146ms" 
Oct 13 00:28:57.083076 containerd[1881]: time="2025-10-13T00:28:57.083016682Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Oct 13 00:28:57.085445 containerd[1881]: time="2025-10-13T00:28:57.085080725Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Oct 13 00:28:57.087118 containerd[1881]: time="2025-10-13T00:28:57.087091447Z" level=info msg="CreateContainer within sandbox \"5fdc1bee8a7fb128bebdd655e60449fb3c4ee203e1c8bda5fd20325ec41b53c8\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 13 00:28:57.111683 containerd[1881]: time="2025-10-13T00:28:57.111653977Z" level=info msg="Container 9327f0afa3cac0b285854593819691aec1cf73b4953882e61102539c17b3608c: CDI devices from CRI Config.CDIDevices: []" Oct 13 00:28:57.111978 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2147808340.mount: Deactivated successfully. 
Oct 13 00:28:57.130775 containerd[1881]: time="2025-10-13T00:28:57.130736432Z" level=info msg="CreateContainer within sandbox \"5fdc1bee8a7fb128bebdd655e60449fb3c4ee203e1c8bda5fd20325ec41b53c8\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"9327f0afa3cac0b285854593819691aec1cf73b4953882e61102539c17b3608c\"" Oct 13 00:28:57.132490 containerd[1881]: time="2025-10-13T00:28:57.132461728Z" level=info msg="StartContainer for \"9327f0afa3cac0b285854593819691aec1cf73b4953882e61102539c17b3608c\"" Oct 13 00:28:57.133898 containerd[1881]: time="2025-10-13T00:28:57.133875102Z" level=info msg="connecting to shim 9327f0afa3cac0b285854593819691aec1cf73b4953882e61102539c17b3608c" address="unix:///run/containerd/s/81f0d3ea7a3296df4497c2d4e78ad336a4591470f1fe1de2526a1758a00d1924" protocol=ttrpc version=3 Oct 13 00:28:57.163078 systemd[1]: Started cri-containerd-9327f0afa3cac0b285854593819691aec1cf73b4953882e61102539c17b3608c.scope - libcontainer container 9327f0afa3cac0b285854593819691aec1cf73b4953882e61102539c17b3608c. 
Oct 13 00:28:57.219091 containerd[1881]: time="2025-10-13T00:28:57.219052444Z" level=info msg="StartContainer for \"9327f0afa3cac0b285854593819691aec1cf73b4953882e61102539c17b3608c\" returns successfully" Oct 13 00:28:57.417023 containerd[1881]: time="2025-10-13T00:28:57.415231290Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:28:57.417978 containerd[1881]: time="2025-10-13T00:28:57.417952683Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Oct 13 00:28:57.419308 containerd[1881]: time="2025-10-13T00:28:57.419275310Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 333.750851ms" Oct 13 00:28:57.419308 containerd[1881]: time="2025-10-13T00:28:57.419306879Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Oct 13 00:28:57.420902 containerd[1881]: time="2025-10-13T00:28:57.420862010Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Oct 13 00:28:57.421621 containerd[1881]: time="2025-10-13T00:28:57.421596786Z" level=info msg="CreateContainer within sandbox \"b78be9ae328e95202bd3d7704e7b8bb4aa6ffd6b26a5caaeaddaa7df0d7704b1\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 13 00:28:57.444188 containerd[1881]: time="2025-10-13T00:28:57.444159459Z" level=info msg="Container d38e07152cd1025ef60dd62f187e1ab3ce435e0d19a941423aa92b6a22006605: CDI devices from CRI Config.CDIDevices: []" Oct 13 00:28:57.467126 
containerd[1881]: time="2025-10-13T00:28:57.467064615Z" level=info msg="CreateContainer within sandbox \"b78be9ae328e95202bd3d7704e7b8bb4aa6ffd6b26a5caaeaddaa7df0d7704b1\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d38e07152cd1025ef60dd62f187e1ab3ce435e0d19a941423aa92b6a22006605\"" Oct 13 00:28:57.468530 containerd[1881]: time="2025-10-13T00:28:57.468496005Z" level=info msg="StartContainer for \"d38e07152cd1025ef60dd62f187e1ab3ce435e0d19a941423aa92b6a22006605\"" Oct 13 00:28:57.470287 containerd[1881]: time="2025-10-13T00:28:57.470257391Z" level=info msg="connecting to shim d38e07152cd1025ef60dd62f187e1ab3ce435e0d19a941423aa92b6a22006605" address="unix:///run/containerd/s/e138cb61a5ef80a08939311b4c2b8993a9ef89674f7e308cbf2d25cf2c4e6999" protocol=ttrpc version=3 Oct 13 00:28:57.497102 systemd[1]: Started cri-containerd-d38e07152cd1025ef60dd62f187e1ab3ce435e0d19a941423aa92b6a22006605.scope - libcontainer container d38e07152cd1025ef60dd62f187e1ab3ce435e0d19a941423aa92b6a22006605. 
Oct 13 00:28:57.562025 kubelet[3373]: I1013 00:28:57.560905 3373 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6d4d7db98f-ctlg8" podStartSLOduration=53.376011061 podStartE2EDuration="1m9.56087831s" podCreationTimestamp="2025-10-13 00:27:48 +0000 UTC" firstStartedPulling="2025-10-13 00:28:40.899431002 +0000 UTC m=+72.960158336" lastFinishedPulling="2025-10-13 00:28:57.084298259 +0000 UTC m=+89.145025585" observedRunningTime="2025-10-13 00:28:57.560627654 +0000 UTC m=+89.621354980" watchObservedRunningTime="2025-10-13 00:28:57.56087831 +0000 UTC m=+89.621605636" Oct 13 00:28:57.562025 kubelet[3373]: I1013 00:28:57.561026 3373 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-745499777d-j7f7t" podStartSLOduration=52.566149447 podStartE2EDuration="1m8.561014507s" podCreationTimestamp="2025-10-13 00:27:49 +0000 UTC" firstStartedPulling="2025-10-13 00:28:40.735636926 +0000 UTC m=+72.796364252" lastFinishedPulling="2025-10-13 00:28:56.730501986 +0000 UTC m=+88.791229312" observedRunningTime="2025-10-13 00:28:57.535514474 +0000 UTC m=+89.596241800" watchObservedRunningTime="2025-10-13 00:28:57.561014507 +0000 UTC m=+89.621741833" Oct 13 00:28:57.579932 containerd[1881]: time="2025-10-13T00:28:57.579410747Z" level=info msg="StartContainer for \"d38e07152cd1025ef60dd62f187e1ab3ce435e0d19a941423aa92b6a22006605\" returns successfully" Oct 13 00:28:58.557781 kubelet[3373]: I1013 00:28:58.557679 3373 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 00:28:58.559204 kubelet[3373]: I1013 00:28:58.558186 3373 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 00:28:58.587808 kubelet[3373]: I1013 00:28:58.587755 3373 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6d4d7db98f-gx5fl" podStartSLOduration=56.203866797 
podStartE2EDuration="1m10.587739954s" podCreationTimestamp="2025-10-13 00:27:48 +0000 UTC" firstStartedPulling="2025-10-13 00:28:43.036285302 +0000 UTC m=+75.097012628" lastFinishedPulling="2025-10-13 00:28:57.420158459 +0000 UTC m=+89.480885785" observedRunningTime="2025-10-13 00:28:58.587010362 +0000 UTC m=+90.647737688" watchObservedRunningTime="2025-10-13 00:28:58.587739954 +0000 UTC m=+90.648467288" Oct 13 00:29:01.759210 containerd[1881]: time="2025-10-13T00:29:01.759151592Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:29:01.762109 containerd[1881]: time="2025-10-13T00:29:01.762042871Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=13761208" Oct 13 00:29:01.765044 containerd[1881]: time="2025-10-13T00:29:01.764998743Z" level=info msg="ImageCreate event name:\"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:29:01.769609 containerd[1881]: time="2025-10-13T00:29:01.769567029Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 00:29:01.770187 containerd[1881]: time="2025-10-13T00:29:01.769840157Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"15130401\" in 4.348951746s" Oct 13 00:29:01.770187 containerd[1881]: time="2025-10-13T00:29:01.769867894Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\"" Oct 13 00:29:01.777182 containerd[1881]: time="2025-10-13T00:29:01.776634451Z" level=info msg="CreateContainer within sandbox \"0d980e1c99d405f1c164192a5a588ae90cbe01b4004ef6601252fc948b0540da\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Oct 13 00:29:01.795895 containerd[1881]: time="2025-10-13T00:29:01.795183953Z" level=info msg="Container c98aee776dd184e460b8c08b864224cfe63ee20ca5e4bbd90af30d5d4ed53baf: CDI devices from CRI Config.CDIDevices: []" Oct 13 00:29:01.798983 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1144094616.mount: Deactivated successfully. Oct 13 00:29:01.815286 containerd[1881]: time="2025-10-13T00:29:01.815164166Z" level=info msg="CreateContainer within sandbox \"0d980e1c99d405f1c164192a5a588ae90cbe01b4004ef6601252fc948b0540da\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"c98aee776dd184e460b8c08b864224cfe63ee20ca5e4bbd90af30d5d4ed53baf\"" Oct 13 00:29:01.817169 containerd[1881]: time="2025-10-13T00:29:01.817091421Z" level=info msg="StartContainer for \"c98aee776dd184e460b8c08b864224cfe63ee20ca5e4bbd90af30d5d4ed53baf\"" Oct 13 00:29:01.818736 containerd[1881]: time="2025-10-13T00:29:01.818713402Z" level=info msg="connecting to shim c98aee776dd184e460b8c08b864224cfe63ee20ca5e4bbd90af30d5d4ed53baf" address="unix:///run/containerd/s/ee2a7f9f4fd7470910858da1558a7822cc8224558736930afced32db1b4e29f9" protocol=ttrpc version=3 Oct 13 00:29:01.840101 systemd[1]: Started cri-containerd-c98aee776dd184e460b8c08b864224cfe63ee20ca5e4bbd90af30d5d4ed53baf.scope - libcontainer container c98aee776dd184e460b8c08b864224cfe63ee20ca5e4bbd90af30d5d4ed53baf. 
Oct 13 00:29:01.881174 containerd[1881]: time="2025-10-13T00:29:01.881131249Z" level=info msg="StartContainer for \"c98aee776dd184e460b8c08b864224cfe63ee20ca5e4bbd90af30d5d4ed53baf\" returns successfully" Oct 13 00:29:02.180177 kubelet[3373]: I1013 00:29:02.180120 3373 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Oct 13 00:29:02.180177 kubelet[3373]: I1013 00:29:02.180180 3373 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Oct 13 00:29:02.589722 kubelet[3373]: I1013 00:29:02.589663 3373 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-b5nsr" podStartSLOduration=36.290444471 podStartE2EDuration="1m9.589649858s" podCreationTimestamp="2025-10-13 00:27:53 +0000 UTC" firstStartedPulling="2025-10-13 00:28:28.474842068 +0000 UTC m=+60.535569394" lastFinishedPulling="2025-10-13 00:29:01.774047455 +0000 UTC m=+93.834774781" observedRunningTime="2025-10-13 00:29:02.588930618 +0000 UTC m=+94.649657944" watchObservedRunningTime="2025-10-13 00:29:02.589649858 +0000 UTC m=+94.650377192" Oct 13 00:29:03.671254 kubelet[3373]: I1013 00:29:03.671215 3373 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 00:29:03.760291 containerd[1881]: time="2025-10-13T00:29:03.760244497Z" level=info msg="StopContainer for \"d38e07152cd1025ef60dd62f187e1ab3ce435e0d19a941423aa92b6a22006605\" with timeout 30 (s)" Oct 13 00:29:03.761500 containerd[1881]: time="2025-10-13T00:29:03.761091461Z" level=info msg="Stop container \"d38e07152cd1025ef60dd62f187e1ab3ce435e0d19a941423aa92b6a22006605\" with signal terminated" Oct 13 00:29:03.795103 systemd[1]: cri-containerd-d38e07152cd1025ef60dd62f187e1ab3ce435e0d19a941423aa92b6a22006605.scope: Deactivated successfully. 
Oct 13 00:29:03.795359 systemd[1]: cri-containerd-d38e07152cd1025ef60dd62f187e1ab3ce435e0d19a941423aa92b6a22006605.scope: Consumed 1.092s CPU time, 42.8M memory peak. Oct 13 00:29:03.799730 containerd[1881]: time="2025-10-13T00:29:03.799661969Z" level=info msg="received exit event container_id:\"d38e07152cd1025ef60dd62f187e1ab3ce435e0d19a941423aa92b6a22006605\" id:\"d38e07152cd1025ef60dd62f187e1ab3ce435e0d19a941423aa92b6a22006605\" pid:5850 exit_status:1 exited_at:{seconds:1760315343 nanos:799334398}" Oct 13 00:29:03.800021 containerd[1881]: time="2025-10-13T00:29:03.799998500Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d38e07152cd1025ef60dd62f187e1ab3ce435e0d19a941423aa92b6a22006605\" id:\"d38e07152cd1025ef60dd62f187e1ab3ce435e0d19a941423aa92b6a22006605\" pid:5850 exit_status:1 exited_at:{seconds:1760315343 nanos:799334398}" Oct 13 00:29:03.825865 systemd[1]: Created slice kubepods-besteffort-pode546cdde_9bd6_470d_b5d1_7c0b51565f0d.slice - libcontainer container kubepods-besteffort-pode546cdde_9bd6_470d_b5d1_7c0b51565f0d.slice. Oct 13 00:29:03.834363 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d38e07152cd1025ef60dd62f187e1ab3ce435e0d19a941423aa92b6a22006605-rootfs.mount: Deactivated successfully. 
Oct 13 00:29:03.982641 kubelet[3373]: I1013 00:29:03.964293 3373 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e546cdde-9bd6-470d-b5d1-7c0b51565f0d-calico-apiserver-certs\") pod \"calico-apiserver-745499777d-8zkrh\" (UID: \"e546cdde-9bd6-470d-b5d1-7c0b51565f0d\") " pod="calico-apiserver/calico-apiserver-745499777d-8zkrh" Oct 13 00:29:03.982641 kubelet[3373]: I1013 00:29:03.964351 3373 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8bmh\" (UniqueName: \"kubernetes.io/projected/e546cdde-9bd6-470d-b5d1-7c0b51565f0d-kube-api-access-v8bmh\") pod \"calico-apiserver-745499777d-8zkrh\" (UID: \"e546cdde-9bd6-470d-b5d1-7c0b51565f0d\") " pod="calico-apiserver/calico-apiserver-745499777d-8zkrh" Oct 13 00:29:04.579922 containerd[1881]: time="2025-10-13T00:29:04.579858191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-745499777d-8zkrh,Uid:e546cdde-9bd6-470d-b5d1-7c0b51565f0d,Namespace:calico-apiserver,Attempt:0,}" Oct 13 00:29:05.051735 containerd[1881]: time="2025-10-13T00:29:05.051590281Z" level=info msg="StopContainer for \"d38e07152cd1025ef60dd62f187e1ab3ce435e0d19a941423aa92b6a22006605\" returns successfully" Oct 13 00:29:05.053294 containerd[1881]: time="2025-10-13T00:29:05.053270160Z" level=info msg="StopPodSandbox for \"b78be9ae328e95202bd3d7704e7b8bb4aa6ffd6b26a5caaeaddaa7df0d7704b1\"" Oct 13 00:29:05.061763 containerd[1881]: time="2025-10-13T00:29:05.061651281Z" level=info msg="Container to stop \"d38e07152cd1025ef60dd62f187e1ab3ce435e0d19a941423aa92b6a22006605\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 13 00:29:05.072277 systemd[1]: cri-containerd-b78be9ae328e95202bd3d7704e7b8bb4aa6ffd6b26a5caaeaddaa7df0d7704b1.scope: Deactivated successfully. 
Oct 13 00:29:05.088888 containerd[1881]: time="2025-10-13T00:29:05.088797048Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b78be9ae328e95202bd3d7704e7b8bb4aa6ffd6b26a5caaeaddaa7df0d7704b1\" id:\"b78be9ae328e95202bd3d7704e7b8bb4aa6ffd6b26a5caaeaddaa7df0d7704b1\" pid:5459 exit_status:137 exited_at:{seconds:1760315345 nanos:88495534}" Oct 13 00:29:05.109129 systemd-networkd[1476]: calib51c7a45b66: Link UP Oct 13 00:29:05.109877 systemd-networkd[1476]: calib51c7a45b66: Gained carrier Oct 13 00:29:05.133980 containerd[1881]: 2025-10-13 00:29:05.031 [INFO][5959] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.1.0--a--27183f81a1-k8s-calico--apiserver--745499777d--8zkrh-eth0 calico-apiserver-745499777d- calico-apiserver e546cdde-9bd6-470d-b5d1-7c0b51565f0d 1179 0 2025-10-13 00:29:03 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:745499777d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.1.0-a-27183f81a1 calico-apiserver-745499777d-8zkrh eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib51c7a45b66 [] [] }} ContainerID="58aecb4abb6fa01b9986d144114362a9004a50dcbf7079fb041fccf43e08d57e" Namespace="calico-apiserver" Pod="calico-apiserver-745499777d-8zkrh" WorkloadEndpoint="ci--4459.1.0--a--27183f81a1-k8s-calico--apiserver--745499777d--8zkrh-" Oct 13 00:29:05.133980 containerd[1881]: 2025-10-13 00:29:05.031 [INFO][5959] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="58aecb4abb6fa01b9986d144114362a9004a50dcbf7079fb041fccf43e08d57e" Namespace="calico-apiserver" Pod="calico-apiserver-745499777d-8zkrh" WorkloadEndpoint="ci--4459.1.0--a--27183f81a1-k8s-calico--apiserver--745499777d--8zkrh-eth0" Oct 13 00:29:05.133980 containerd[1881]: 2025-10-13 
00:29:05.054 [INFO][5971] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="58aecb4abb6fa01b9986d144114362a9004a50dcbf7079fb041fccf43e08d57e" HandleID="k8s-pod-network.58aecb4abb6fa01b9986d144114362a9004a50dcbf7079fb041fccf43e08d57e" Workload="ci--4459.1.0--a--27183f81a1-k8s-calico--apiserver--745499777d--8zkrh-eth0" Oct 13 00:29:05.133980 containerd[1881]: 2025-10-13 00:29:05.055 [INFO][5971] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="58aecb4abb6fa01b9986d144114362a9004a50dcbf7079fb041fccf43e08d57e" HandleID="k8s-pod-network.58aecb4abb6fa01b9986d144114362a9004a50dcbf7079fb041fccf43e08d57e" Workload="ci--4459.1.0--a--27183f81a1-k8s-calico--apiserver--745499777d--8zkrh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d2ff0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459.1.0-a-27183f81a1", "pod":"calico-apiserver-745499777d-8zkrh", "timestamp":"2025-10-13 00:29:05.054776649 +0000 UTC"}, Hostname:"ci-4459.1.0-a-27183f81a1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 00:29:05.133980 containerd[1881]: 2025-10-13 00:29:05.055 [INFO][5971] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 00:29:05.133980 containerd[1881]: 2025-10-13 00:29:05.055 [INFO][5971] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 00:29:05.133980 containerd[1881]: 2025-10-13 00:29:05.055 [INFO][5971] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.1.0-a-27183f81a1' Oct 13 00:29:05.133980 containerd[1881]: 2025-10-13 00:29:05.062 [INFO][5971] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.58aecb4abb6fa01b9986d144114362a9004a50dcbf7079fb041fccf43e08d57e" host="ci-4459.1.0-a-27183f81a1" Oct 13 00:29:05.133980 containerd[1881]: 2025-10-13 00:29:05.066 [INFO][5971] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.1.0-a-27183f81a1" Oct 13 00:29:05.133980 containerd[1881]: 2025-10-13 00:29:05.072 [INFO][5971] ipam/ipam.go 511: Trying affinity for 192.168.71.128/26 host="ci-4459.1.0-a-27183f81a1" Oct 13 00:29:05.133980 containerd[1881]: 2025-10-13 00:29:05.074 [INFO][5971] ipam/ipam.go 158: Attempting to load block cidr=192.168.71.128/26 host="ci-4459.1.0-a-27183f81a1" Oct 13 00:29:05.133980 containerd[1881]: 2025-10-13 00:29:05.079 [INFO][5971] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.71.128/26 host="ci-4459.1.0-a-27183f81a1" Oct 13 00:29:05.133980 containerd[1881]: 2025-10-13 00:29:05.079 [INFO][5971] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.71.128/26 handle="k8s-pod-network.58aecb4abb6fa01b9986d144114362a9004a50dcbf7079fb041fccf43e08d57e" host="ci-4459.1.0-a-27183f81a1" Oct 13 00:29:05.133980 containerd[1881]: 2025-10-13 00:29:05.081 [INFO][5971] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.58aecb4abb6fa01b9986d144114362a9004a50dcbf7079fb041fccf43e08d57e Oct 13 00:29:05.133980 containerd[1881]: 2025-10-13 00:29:05.086 [INFO][5971] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.71.128/26 handle="k8s-pod-network.58aecb4abb6fa01b9986d144114362a9004a50dcbf7079fb041fccf43e08d57e" host="ci-4459.1.0-a-27183f81a1" Oct 13 00:29:05.133980 containerd[1881]: 2025-10-13 00:29:05.096 [INFO][5971] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.71.138/26] block=192.168.71.128/26 handle="k8s-pod-network.58aecb4abb6fa01b9986d144114362a9004a50dcbf7079fb041fccf43e08d57e" host="ci-4459.1.0-a-27183f81a1" Oct 13 00:29:05.133980 containerd[1881]: 2025-10-13 00:29:05.097 [INFO][5971] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.71.138/26] handle="k8s-pod-network.58aecb4abb6fa01b9986d144114362a9004a50dcbf7079fb041fccf43e08d57e" host="ci-4459.1.0-a-27183f81a1" Oct 13 00:29:05.133980 containerd[1881]: 2025-10-13 00:29:05.097 [INFO][5971] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 00:29:05.133980 containerd[1881]: 2025-10-13 00:29:05.097 [INFO][5971] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.71.138/26] IPv6=[] ContainerID="58aecb4abb6fa01b9986d144114362a9004a50dcbf7079fb041fccf43e08d57e" HandleID="k8s-pod-network.58aecb4abb6fa01b9986d144114362a9004a50dcbf7079fb041fccf43e08d57e" Workload="ci--4459.1.0--a--27183f81a1-k8s-calico--apiserver--745499777d--8zkrh-eth0" Oct 13 00:29:05.134372 containerd[1881]: 2025-10-13 00:29:05.100 [INFO][5959] cni-plugin/k8s.go 418: Populated endpoint ContainerID="58aecb4abb6fa01b9986d144114362a9004a50dcbf7079fb041fccf43e08d57e" Namespace="calico-apiserver" Pod="calico-apiserver-745499777d-8zkrh" WorkloadEndpoint="ci--4459.1.0--a--27183f81a1-k8s-calico--apiserver--745499777d--8zkrh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--a--27183f81a1-k8s-calico--apiserver--745499777d--8zkrh-eth0", GenerateName:"calico-apiserver-745499777d-", Namespace:"calico-apiserver", SelfLink:"", UID:"e546cdde-9bd6-470d-b5d1-7c0b51565f0d", ResourceVersion:"1179", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 29, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"745499777d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-a-27183f81a1", ContainerID:"", Pod:"calico-apiserver-745499777d-8zkrh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.71.138/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib51c7a45b66", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:29:05.134372 containerd[1881]: 2025-10-13 00:29:05.100 [INFO][5959] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.71.138/32] ContainerID="58aecb4abb6fa01b9986d144114362a9004a50dcbf7079fb041fccf43e08d57e" Namespace="calico-apiserver" Pod="calico-apiserver-745499777d-8zkrh" WorkloadEndpoint="ci--4459.1.0--a--27183f81a1-k8s-calico--apiserver--745499777d--8zkrh-eth0" Oct 13 00:29:05.134372 containerd[1881]: 2025-10-13 00:29:05.100 [INFO][5959] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib51c7a45b66 ContainerID="58aecb4abb6fa01b9986d144114362a9004a50dcbf7079fb041fccf43e08d57e" Namespace="calico-apiserver" Pod="calico-apiserver-745499777d-8zkrh" WorkloadEndpoint="ci--4459.1.0--a--27183f81a1-k8s-calico--apiserver--745499777d--8zkrh-eth0" Oct 13 00:29:05.134372 containerd[1881]: 2025-10-13 00:29:05.109 [INFO][5959] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="58aecb4abb6fa01b9986d144114362a9004a50dcbf7079fb041fccf43e08d57e" Namespace="calico-apiserver" 
Pod="calico-apiserver-745499777d-8zkrh" WorkloadEndpoint="ci--4459.1.0--a--27183f81a1-k8s-calico--apiserver--745499777d--8zkrh-eth0" Oct 13 00:29:05.134372 containerd[1881]: 2025-10-13 00:29:05.110 [INFO][5959] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="58aecb4abb6fa01b9986d144114362a9004a50dcbf7079fb041fccf43e08d57e" Namespace="calico-apiserver" Pod="calico-apiserver-745499777d-8zkrh" WorkloadEndpoint="ci--4459.1.0--a--27183f81a1-k8s-calico--apiserver--745499777d--8zkrh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.1.0--a--27183f81a1-k8s-calico--apiserver--745499777d--8zkrh-eth0", GenerateName:"calico-apiserver-745499777d-", Namespace:"calico-apiserver", SelfLink:"", UID:"e546cdde-9bd6-470d-b5d1-7c0b51565f0d", ResourceVersion:"1179", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 0, 29, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"745499777d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.1.0-a-27183f81a1", ContainerID:"58aecb4abb6fa01b9986d144114362a9004a50dcbf7079fb041fccf43e08d57e", Pod:"calico-apiserver-745499777d-8zkrh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.71.138/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, 
InterfaceName:"calib51c7a45b66", MAC:"6e:af:cb:32:03:d6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 00:29:05.134372 containerd[1881]: 2025-10-13 00:29:05.129 [INFO][5959] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="58aecb4abb6fa01b9986d144114362a9004a50dcbf7079fb041fccf43e08d57e" Namespace="calico-apiserver" Pod="calico-apiserver-745499777d-8zkrh" WorkloadEndpoint="ci--4459.1.0--a--27183f81a1-k8s-calico--apiserver--745499777d--8zkrh-eth0" Oct 13 00:29:05.134902 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b78be9ae328e95202bd3d7704e7b8bb4aa6ffd6b26a5caaeaddaa7df0d7704b1-rootfs.mount: Deactivated successfully. Oct 13 00:29:05.136175 containerd[1881]: time="2025-10-13T00:29:05.135148594Z" level=info msg="shim disconnected" id=b78be9ae328e95202bd3d7704e7b8bb4aa6ffd6b26a5caaeaddaa7df0d7704b1 namespace=k8s.io Oct 13 00:29:05.136175 containerd[1881]: time="2025-10-13T00:29:05.136122474Z" level=warning msg="cleaning up after shim disconnected" id=b78be9ae328e95202bd3d7704e7b8bb4aa6ffd6b26a5caaeaddaa7df0d7704b1 namespace=k8s.io Oct 13 00:29:05.136175 containerd[1881]: time="2025-10-13T00:29:05.136154651Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 13 00:29:05.178494 containerd[1881]: time="2025-10-13T00:29:05.178453417Z" level=info msg="received exit event sandbox_id:\"b78be9ae328e95202bd3d7704e7b8bb4aa6ffd6b26a5caaeaddaa7df0d7704b1\" exit_status:137 exited_at:{seconds:1760315345 nanos:88495534}" Oct 13 00:29:05.184332 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b78be9ae328e95202bd3d7704e7b8bb4aa6ffd6b26a5caaeaddaa7df0d7704b1-shm.mount: Deactivated successfully. 
Oct 13 00:29:05.195342 containerd[1881]: time="2025-10-13T00:29:05.195218325Z" level=info msg="connecting to shim 58aecb4abb6fa01b9986d144114362a9004a50dcbf7079fb041fccf43e08d57e" address="unix:///run/containerd/s/dd7f9b0dfce52a1f485275efe14af1fa85f3ce56bf7bb9b8f6e959d8704a13be" namespace=k8s.io protocol=ttrpc version=3 Oct 13 00:29:05.225080 systemd[1]: Started cri-containerd-58aecb4abb6fa01b9986d144114362a9004a50dcbf7079fb041fccf43e08d57e.scope - libcontainer container 58aecb4abb6fa01b9986d144114362a9004a50dcbf7079fb041fccf43e08d57e. Oct 13 00:29:05.267785 containerd[1881]: time="2025-10-13T00:29:05.267508414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-745499777d-8zkrh,Uid:e546cdde-9bd6-470d-b5d1-7c0b51565f0d,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"58aecb4abb6fa01b9986d144114362a9004a50dcbf7079fb041fccf43e08d57e\"" Oct 13 00:29:05.272120 containerd[1881]: time="2025-10-13T00:29:05.272093116Z" level=info msg="CreateContainer within sandbox \"58aecb4abb6fa01b9986d144114362a9004a50dcbf7079fb041fccf43e08d57e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 13 00:29:05.275621 systemd-networkd[1476]: cali223f2308102: Link DOWN Oct 13 00:29:05.275816 systemd-networkd[1476]: cali223f2308102: Lost carrier Oct 13 00:29:05.296959 containerd[1881]: time="2025-10-13T00:29:05.296652710Z" level=info msg="Container 8e3bb985453e3ce5bdd94428a68d1914f9f349c415a48d39106e0bdf19a66755: CDI devices from CRI Config.CDIDevices: []" Oct 13 00:29:05.320550 containerd[1881]: time="2025-10-13T00:29:05.320444791Z" level=info msg="CreateContainer within sandbox \"58aecb4abb6fa01b9986d144114362a9004a50dcbf7079fb041fccf43e08d57e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8e3bb985453e3ce5bdd94428a68d1914f9f349c415a48d39106e0bdf19a66755\"" Oct 13 00:29:05.322050 containerd[1881]: time="2025-10-13T00:29:05.322015763Z" level=info msg="StartContainer for 
\"8e3bb985453e3ce5bdd94428a68d1914f9f349c415a48d39106e0bdf19a66755\"" Oct 13 00:29:05.323956 containerd[1881]: time="2025-10-13T00:29:05.322911616Z" level=info msg="connecting to shim 8e3bb985453e3ce5bdd94428a68d1914f9f349c415a48d39106e0bdf19a66755" address="unix:///run/containerd/s/dd7f9b0dfce52a1f485275efe14af1fa85f3ce56bf7bb9b8f6e959d8704a13be" protocol=ttrpc version=3 Oct 13 00:29:05.351105 systemd[1]: Started cri-containerd-8e3bb985453e3ce5bdd94428a68d1914f9f349c415a48d39106e0bdf19a66755.scope - libcontainer container 8e3bb985453e3ce5bdd94428a68d1914f9f349c415a48d39106e0bdf19a66755. Oct 13 00:29:05.387532 containerd[1881]: 2025-10-13 00:29:05.264 [INFO][6057] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b78be9ae328e95202bd3d7704e7b8bb4aa6ffd6b26a5caaeaddaa7df0d7704b1" Oct 13 00:29:05.387532 containerd[1881]: 2025-10-13 00:29:05.264 [INFO][6057] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b78be9ae328e95202bd3d7704e7b8bb4aa6ffd6b26a5caaeaddaa7df0d7704b1" iface="eth0" netns="/var/run/netns/cni-8363d77c-047d-2075-3f9d-a3e44ada204a" Oct 13 00:29:05.387532 containerd[1881]: 2025-10-13 00:29:05.264 [INFO][6057] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b78be9ae328e95202bd3d7704e7b8bb4aa6ffd6b26a5caaeaddaa7df0d7704b1" iface="eth0" netns="/var/run/netns/cni-8363d77c-047d-2075-3f9d-a3e44ada204a" Oct 13 00:29:05.387532 containerd[1881]: 2025-10-13 00:29:05.289 [INFO][6057] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="b78be9ae328e95202bd3d7704e7b8bb4aa6ffd6b26a5caaeaddaa7df0d7704b1" after=24.50468ms iface="eth0" netns="/var/run/netns/cni-8363d77c-047d-2075-3f9d-a3e44ada204a" Oct 13 00:29:05.387532 containerd[1881]: 2025-10-13 00:29:05.289 [INFO][6057] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b78be9ae328e95202bd3d7704e7b8bb4aa6ffd6b26a5caaeaddaa7df0d7704b1" Oct 13 00:29:05.387532 containerd[1881]: 2025-10-13 00:29:05.289 [INFO][6057] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b78be9ae328e95202bd3d7704e7b8bb4aa6ffd6b26a5caaeaddaa7df0d7704b1" Oct 13 00:29:05.387532 containerd[1881]: 2025-10-13 00:29:05.326 [INFO][6091] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b78be9ae328e95202bd3d7704e7b8bb4aa6ffd6b26a5caaeaddaa7df0d7704b1" HandleID="k8s-pod-network.b78be9ae328e95202bd3d7704e7b8bb4aa6ffd6b26a5caaeaddaa7df0d7704b1" Workload="ci--4459.1.0--a--27183f81a1-k8s-calico--apiserver--6d4d7db98f--gx5fl-eth0" Oct 13 00:29:05.387532 containerd[1881]: 2025-10-13 00:29:05.326 [INFO][6091] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 00:29:05.387532 containerd[1881]: 2025-10-13 00:29:05.326 [INFO][6091] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 00:29:05.387532 containerd[1881]: 2025-10-13 00:29:05.380 [INFO][6091] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="b78be9ae328e95202bd3d7704e7b8bb4aa6ffd6b26a5caaeaddaa7df0d7704b1" HandleID="k8s-pod-network.b78be9ae328e95202bd3d7704e7b8bb4aa6ffd6b26a5caaeaddaa7df0d7704b1" Workload="ci--4459.1.0--a--27183f81a1-k8s-calico--apiserver--6d4d7db98f--gx5fl-eth0" Oct 13 00:29:05.387532 containerd[1881]: 2025-10-13 00:29:05.381 [INFO][6091] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b78be9ae328e95202bd3d7704e7b8bb4aa6ffd6b26a5caaeaddaa7df0d7704b1" HandleID="k8s-pod-network.b78be9ae328e95202bd3d7704e7b8bb4aa6ffd6b26a5caaeaddaa7df0d7704b1" Workload="ci--4459.1.0--a--27183f81a1-k8s-calico--apiserver--6d4d7db98f--gx5fl-eth0" Oct 13 00:29:05.387532 containerd[1881]: 2025-10-13 00:29:05.382 [INFO][6091] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 00:29:05.387532 containerd[1881]: 2025-10-13 00:29:05.384 [INFO][6057] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="b78be9ae328e95202bd3d7704e7b8bb4aa6ffd6b26a5caaeaddaa7df0d7704b1" Oct 13 00:29:05.389973 containerd[1881]: time="2025-10-13T00:29:05.389924973Z" level=info msg="TearDown network for sandbox \"b78be9ae328e95202bd3d7704e7b8bb4aa6ffd6b26a5caaeaddaa7df0d7704b1\" successfully" Oct 13 00:29:05.389973 containerd[1881]: time="2025-10-13T00:29:05.389968446Z" level=info msg="StopPodSandbox for \"b78be9ae328e95202bd3d7704e7b8bb4aa6ffd6b26a5caaeaddaa7df0d7704b1\" returns successfully" Oct 13 00:29:05.447903 containerd[1881]: time="2025-10-13T00:29:05.447718309Z" level=info msg="StartContainer for \"8e3bb985453e3ce5bdd94428a68d1914f9f349c415a48d39106e0bdf19a66755\" returns successfully" Oct 13 00:29:05.574710 kubelet[3373]: I1013 00:29:05.574590 3373 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mt6kb\" (UniqueName: \"kubernetes.io/projected/dd6494fc-6077-4e51-941e-0d0f0b6a8344-kube-api-access-mt6kb\") pod \"dd6494fc-6077-4e51-941e-0d0f0b6a8344\" (UID: \"dd6494fc-6077-4e51-941e-0d0f0b6a8344\") " Oct 13 00:29:05.574710 kubelet[3373]: I1013 00:29:05.574631 3373 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/dd6494fc-6077-4e51-941e-0d0f0b6a8344-calico-apiserver-certs\") pod \"dd6494fc-6077-4e51-941e-0d0f0b6a8344\" (UID: \"dd6494fc-6077-4e51-941e-0d0f0b6a8344\") " Oct 13 00:29:05.579460 kubelet[3373]: I1013 00:29:05.579417 3373 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd6494fc-6077-4e51-941e-0d0f0b6a8344-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "dd6494fc-6077-4e51-941e-0d0f0b6a8344" (UID: "dd6494fc-6077-4e51-941e-0d0f0b6a8344"). InnerVolumeSpecName "calico-apiserver-certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 13 00:29:05.581198 kubelet[3373]: I1013 00:29:05.580589 3373 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd6494fc-6077-4e51-941e-0d0f0b6a8344-kube-api-access-mt6kb" (OuterVolumeSpecName: "kube-api-access-mt6kb") pod "dd6494fc-6077-4e51-941e-0d0f0b6a8344" (UID: "dd6494fc-6077-4e51-941e-0d0f0b6a8344"). InnerVolumeSpecName "kube-api-access-mt6kb". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 13 00:29:05.584527 kubelet[3373]: I1013 00:29:05.584458 3373 scope.go:117] "RemoveContainer" containerID="d38e07152cd1025ef60dd62f187e1ab3ce435e0d19a941423aa92b6a22006605" Oct 13 00:29:05.591605 systemd[1]: Removed slice kubepods-besteffort-poddd6494fc_6077_4e51_941e_0d0f0b6a8344.slice - libcontainer container kubepods-besteffort-poddd6494fc_6077_4e51_941e_0d0f0b6a8344.slice. Oct 13 00:29:05.591676 systemd[1]: kubepods-besteffort-poddd6494fc_6077_4e51_941e_0d0f0b6a8344.slice: Consumed 1.105s CPU time, 43M memory peak. 
Oct 13 00:29:05.614424 containerd[1881]: time="2025-10-13T00:29:05.614233500Z" level=info msg="RemoveContainer for \"d38e07152cd1025ef60dd62f187e1ab3ce435e0d19a941423aa92b6a22006605\""
Oct 13 00:29:05.637102 kubelet[3373]: I1013 00:29:05.637037 3373 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-745499777d-8zkrh" podStartSLOduration=2.637014509 podStartE2EDuration="2.637014509s" podCreationTimestamp="2025-10-13 00:29:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 00:29:05.634874799 +0000 UTC m=+97.695602133" watchObservedRunningTime="2025-10-13 00:29:05.637014509 +0000 UTC m=+97.697741835"
Oct 13 00:29:05.641009 containerd[1881]: time="2025-10-13T00:29:05.640906420Z" level=info msg="RemoveContainer for \"d38e07152cd1025ef60dd62f187e1ab3ce435e0d19a941423aa92b6a22006605\" returns successfully"
Oct 13 00:29:05.642143 kubelet[3373]: I1013 00:29:05.642101 3373 scope.go:117] "RemoveContainer" containerID="d38e07152cd1025ef60dd62f187e1ab3ce435e0d19a941423aa92b6a22006605"
Oct 13 00:29:05.642427 containerd[1881]: time="2025-10-13T00:29:05.642323042Z" level=error msg="ContainerStatus for \"d38e07152cd1025ef60dd62f187e1ab3ce435e0d19a941423aa92b6a22006605\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d38e07152cd1025ef60dd62f187e1ab3ce435e0d19a941423aa92b6a22006605\": not found"
Oct 13 00:29:05.642617 kubelet[3373]: E1013 00:29:05.642542 3373 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d38e07152cd1025ef60dd62f187e1ab3ce435e0d19a941423aa92b6a22006605\": not found" containerID="d38e07152cd1025ef60dd62f187e1ab3ce435e0d19a941423aa92b6a22006605"
Oct 13 00:29:05.642758 kubelet[3373]: I1013 00:29:05.642699 3373 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d38e07152cd1025ef60dd62f187e1ab3ce435e0d19a941423aa92b6a22006605"} err="failed to get container status \"d38e07152cd1025ef60dd62f187e1ab3ce435e0d19a941423aa92b6a22006605\": rpc error: code = NotFound desc = an error occurred when try to find container \"d38e07152cd1025ef60dd62f187e1ab3ce435e0d19a941423aa92b6a22006605\": not found"
Oct 13 00:29:05.675996 kubelet[3373]: I1013 00:29:05.675916 3373 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mt6kb\" (UniqueName: \"kubernetes.io/projected/dd6494fc-6077-4e51-941e-0d0f0b6a8344-kube-api-access-mt6kb\") on node \"ci-4459.1.0-a-27183f81a1\" DevicePath \"\""
Oct 13 00:29:05.676316 kubelet[3373]: I1013 00:29:05.676086 3373 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/dd6494fc-6077-4e51-941e-0d0f0b6a8344-calico-apiserver-certs\") on node \"ci-4459.1.0-a-27183f81a1\" DevicePath \"\""
Oct 13 00:29:06.000671 systemd[1]: run-netns-cni\x2d8363d77c\x2d047d\x2d2075\x2d3f9d\x2da3e44ada204a.mount: Deactivated successfully.
Oct 13 00:29:06.000764 systemd[1]: var-lib-kubelet-pods-dd6494fc\x2d6077\x2d4e51\x2d941e\x2d0d0f0b6a8344-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully.
Oct 13 00:29:06.000807 systemd[1]: var-lib-kubelet-pods-dd6494fc\x2d6077\x2d4e51\x2d941e\x2d0d0f0b6a8344-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmt6kb.mount: Deactivated successfully.
Oct 13 00:29:06.063545 kubelet[3373]: I1013 00:29:06.063507 3373 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd6494fc-6077-4e51-941e-0d0f0b6a8344" path="/var/lib/kubelet/pods/dd6494fc-6077-4e51-941e-0d0f0b6a8344/volumes"
Oct 13 00:29:07.088569 containerd[1881]: time="2025-10-13T00:29:07.088317641Z" level=info msg="StopContainer for \"9327f0afa3cac0b285854593819691aec1cf73b4953882e61102539c17b3608c\" with timeout 30 (s)"
Oct 13 00:29:07.090167 containerd[1881]: time="2025-10-13T00:29:07.090090012Z" level=info msg="Stop container \"9327f0afa3cac0b285854593819691aec1cf73b4953882e61102539c17b3608c\" with signal terminated"
Oct 13 00:29:07.116594 systemd[1]: cri-containerd-9327f0afa3cac0b285854593819691aec1cf73b4953882e61102539c17b3608c.scope: Deactivated successfully.
Oct 13 00:29:07.123779 containerd[1881]: time="2025-10-13T00:29:07.123640018Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9327f0afa3cac0b285854593819691aec1cf73b4953882e61102539c17b3608c\" id:\"9327f0afa3cac0b285854593819691aec1cf73b4953882e61102539c17b3608c\" pid:5815 exit_status:1 exited_at:{seconds:1760315347 nanos:120586684}"
Oct 13 00:29:07.124319 containerd[1881]: time="2025-10-13T00:29:07.124111033Z" level=info msg="received exit event container_id:\"9327f0afa3cac0b285854593819691aec1cf73b4953882e61102539c17b3608c\" id:\"9327f0afa3cac0b285854593819691aec1cf73b4953882e61102539c17b3608c\" pid:5815 exit_status:1 exited_at:{seconds:1760315347 nanos:120586684}"
Oct 13 00:29:07.141582 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9327f0afa3cac0b285854593819691aec1cf73b4953882e61102539c17b3608c-rootfs.mount: Deactivated successfully.
Oct 13 00:29:07.166188 systemd-networkd[1476]: calib51c7a45b66: Gained IPv6LL
Oct 13 00:29:07.191974 containerd[1881]: time="2025-10-13T00:29:07.191820001Z" level=info msg="StopContainer for \"9327f0afa3cac0b285854593819691aec1cf73b4953882e61102539c17b3608c\" returns successfully"
Oct 13 00:29:07.193011 containerd[1881]: time="2025-10-13T00:29:07.192899493Z" level=info msg="StopPodSandbox for \"5fdc1bee8a7fb128bebdd655e60449fb3c4ee203e1c8bda5fd20325ec41b53c8\""
Oct 13 00:29:07.193275 containerd[1881]: time="2025-10-13T00:29:07.193246264Z" level=info msg="Container to stop \"9327f0afa3cac0b285854593819691aec1cf73b4953882e61102539c17b3608c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Oct 13 00:29:07.203403 systemd[1]: cri-containerd-5fdc1bee8a7fb128bebdd655e60449fb3c4ee203e1c8bda5fd20325ec41b53c8.scope: Deactivated successfully.
Oct 13 00:29:07.206282 containerd[1881]: time="2025-10-13T00:29:07.206259602Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5fdc1bee8a7fb128bebdd655e60449fb3c4ee203e1c8bda5fd20325ec41b53c8\" id:\"5fdc1bee8a7fb128bebdd655e60449fb3c4ee203e1c8bda5fd20325ec41b53c8\" pid:5413 exit_status:137 exited_at:{seconds:1760315347 nanos:204266943}"
Oct 13 00:29:07.225173 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5fdc1bee8a7fb128bebdd655e60449fb3c4ee203e1c8bda5fd20325ec41b53c8-rootfs.mount: Deactivated successfully.
Oct 13 00:29:07.225954 containerd[1881]: time="2025-10-13T00:29:07.225892239Z" level=info msg="shim disconnected" id=5fdc1bee8a7fb128bebdd655e60449fb3c4ee203e1c8bda5fd20325ec41b53c8 namespace=k8s.io
Oct 13 00:29:07.227523 containerd[1881]: time="2025-10-13T00:29:07.225918000Z" level=warning msg="cleaning up after shim disconnected" id=5fdc1bee8a7fb128bebdd655e60449fb3c4ee203e1c8bda5fd20325ec41b53c8 namespace=k8s.io
Oct 13 00:29:07.227523 containerd[1881]: time="2025-10-13T00:29:07.226599727Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 13 00:29:07.247745 containerd[1881]: time="2025-10-13T00:29:07.247713814Z" level=info msg="received exit event sandbox_id:\"5fdc1bee8a7fb128bebdd655e60449fb3c4ee203e1c8bda5fd20325ec41b53c8\" exit_status:137 exited_at:{seconds:1760315347 nanos:204266943}"
Oct 13 00:29:07.251488 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5fdc1bee8a7fb128bebdd655e60449fb3c4ee203e1c8bda5fd20325ec41b53c8-shm.mount: Deactivated successfully.
Oct 13 00:29:07.294057 systemd-networkd[1476]: calib5bffcc03ab: Link DOWN
Oct 13 00:29:07.294063 systemd-networkd[1476]: calib5bffcc03ab: Lost carrier
Oct 13 00:29:07.362398 containerd[1881]: 2025-10-13 00:29:07.291 [INFO][6214] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5fdc1bee8a7fb128bebdd655e60449fb3c4ee203e1c8bda5fd20325ec41b53c8"
Oct 13 00:29:07.362398 containerd[1881]: 2025-10-13 00:29:07.292 [INFO][6214] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5fdc1bee8a7fb128bebdd655e60449fb3c4ee203e1c8bda5fd20325ec41b53c8" iface="eth0" netns="/var/run/netns/cni-11fe03e7-70c9-5180-6c04-e6ca7a28b39d"
Oct 13 00:29:07.362398 containerd[1881]: 2025-10-13 00:29:07.292 [INFO][6214] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5fdc1bee8a7fb128bebdd655e60449fb3c4ee203e1c8bda5fd20325ec41b53c8" iface="eth0" netns="/var/run/netns/cni-11fe03e7-70c9-5180-6c04-e6ca7a28b39d"
Oct 13 00:29:07.362398 containerd[1881]: 2025-10-13 00:29:07.300 [INFO][6214] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="5fdc1bee8a7fb128bebdd655e60449fb3c4ee203e1c8bda5fd20325ec41b53c8" after=8.417312ms iface="eth0" netns="/var/run/netns/cni-11fe03e7-70c9-5180-6c04-e6ca7a28b39d"
Oct 13 00:29:07.362398 containerd[1881]: 2025-10-13 00:29:07.300 [INFO][6214] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5fdc1bee8a7fb128bebdd655e60449fb3c4ee203e1c8bda5fd20325ec41b53c8"
Oct 13 00:29:07.362398 containerd[1881]: 2025-10-13 00:29:07.300 [INFO][6214] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5fdc1bee8a7fb128bebdd655e60449fb3c4ee203e1c8bda5fd20325ec41b53c8"
Oct 13 00:29:07.362398 containerd[1881]: 2025-10-13 00:29:07.322 [INFO][6223] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5fdc1bee8a7fb128bebdd655e60449fb3c4ee203e1c8bda5fd20325ec41b53c8" HandleID="k8s-pod-network.5fdc1bee8a7fb128bebdd655e60449fb3c4ee203e1c8bda5fd20325ec41b53c8" Workload="ci--4459.1.0--a--27183f81a1-k8s-calico--apiserver--6d4d7db98f--ctlg8-eth0"
Oct 13 00:29:07.362398 containerd[1881]: 2025-10-13 00:29:07.323 [INFO][6223] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Oct 13 00:29:07.362398 containerd[1881]: 2025-10-13 00:29:07.323 [INFO][6223] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Oct 13 00:29:07.362398 containerd[1881]: 2025-10-13 00:29:07.358 [INFO][6223] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="5fdc1bee8a7fb128bebdd655e60449fb3c4ee203e1c8bda5fd20325ec41b53c8" HandleID="k8s-pod-network.5fdc1bee8a7fb128bebdd655e60449fb3c4ee203e1c8bda5fd20325ec41b53c8" Workload="ci--4459.1.0--a--27183f81a1-k8s-calico--apiserver--6d4d7db98f--ctlg8-eth0"
Oct 13 00:29:07.362398 containerd[1881]: 2025-10-13 00:29:07.358 [INFO][6223] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5fdc1bee8a7fb128bebdd655e60449fb3c4ee203e1c8bda5fd20325ec41b53c8" HandleID="k8s-pod-network.5fdc1bee8a7fb128bebdd655e60449fb3c4ee203e1c8bda5fd20325ec41b53c8" Workload="ci--4459.1.0--a--27183f81a1-k8s-calico--apiserver--6d4d7db98f--ctlg8-eth0"
Oct 13 00:29:07.362398 containerd[1881]: 2025-10-13 00:29:07.359 [INFO][6223] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Oct 13 00:29:07.362398 containerd[1881]: 2025-10-13 00:29:07.361 [INFO][6214] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5fdc1bee8a7fb128bebdd655e60449fb3c4ee203e1c8bda5fd20325ec41b53c8"
Oct 13 00:29:07.364785 systemd[1]: run-netns-cni\x2d11fe03e7\x2d70c9\x2d5180\x2d6c04\x2de6ca7a28b39d.mount: Deactivated successfully.
Oct 13 00:29:07.367604 containerd[1881]: time="2025-10-13T00:29:07.367338871Z" level=info msg="TearDown network for sandbox \"5fdc1bee8a7fb128bebdd655e60449fb3c4ee203e1c8bda5fd20325ec41b53c8\" successfully"
Oct 13 00:29:07.367604 containerd[1881]: time="2025-10-13T00:29:07.367370064Z" level=info msg="StopPodSandbox for \"5fdc1bee8a7fb128bebdd655e60449fb3c4ee203e1c8bda5fd20325ec41b53c8\" returns successfully"
Oct 13 00:29:07.488991 kubelet[3373]: I1013 00:29:07.488523 3373 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d5df1242-3535-480f-83ed-f48fbf6f9e8f-calico-apiserver-certs\") pod \"d5df1242-3535-480f-83ed-f48fbf6f9e8f\" (UID: \"d5df1242-3535-480f-83ed-f48fbf6f9e8f\") "
Oct 13 00:29:07.488991 kubelet[3373]: I1013 00:29:07.488586 3373 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ql6t8\" (UniqueName: \"kubernetes.io/projected/d5df1242-3535-480f-83ed-f48fbf6f9e8f-kube-api-access-ql6t8\") pod \"d5df1242-3535-480f-83ed-f48fbf6f9e8f\" (UID: \"d5df1242-3535-480f-83ed-f48fbf6f9e8f\") "
Oct 13 00:29:07.490762 kubelet[3373]: I1013 00:29:07.490737 3373 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5df1242-3535-480f-83ed-f48fbf6f9e8f-kube-api-access-ql6t8" (OuterVolumeSpecName: "kube-api-access-ql6t8") pod "d5df1242-3535-480f-83ed-f48fbf6f9e8f" (UID: "d5df1242-3535-480f-83ed-f48fbf6f9e8f"). InnerVolumeSpecName "kube-api-access-ql6t8". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Oct 13 00:29:07.491551 kubelet[3373]: I1013 00:29:07.491520 3373 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5df1242-3535-480f-83ed-f48fbf6f9e8f-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "d5df1242-3535-480f-83ed-f48fbf6f9e8f" (UID: "d5df1242-3535-480f-83ed-f48fbf6f9e8f"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Oct 13 00:29:07.494004 systemd[1]: var-lib-kubelet-pods-d5df1242\x2d3535\x2d480f\x2d83ed\x2df48fbf6f9e8f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dql6t8.mount: Deactivated successfully.
Oct 13 00:29:07.494086 systemd[1]: var-lib-kubelet-pods-d5df1242\x2d3535\x2d480f\x2d83ed\x2df48fbf6f9e8f-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully.
Oct 13 00:29:07.589103 kubelet[3373]: I1013 00:29:07.589053 3373 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ql6t8\" (UniqueName: \"kubernetes.io/projected/d5df1242-3535-480f-83ed-f48fbf6f9e8f-kube-api-access-ql6t8\") on node \"ci-4459.1.0-a-27183f81a1\" DevicePath \"\""
Oct 13 00:29:07.589103 kubelet[3373]: I1013 00:29:07.589083 3373 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d5df1242-3535-480f-83ed-f48fbf6f9e8f-calico-apiserver-certs\") on node \"ci-4459.1.0-a-27183f81a1\" DevicePath \"\""
Oct 13 00:29:07.600568 kubelet[3373]: I1013 00:29:07.600092 3373 scope.go:117] "RemoveContainer" containerID="9327f0afa3cac0b285854593819691aec1cf73b4953882e61102539c17b3608c"
Oct 13 00:29:07.602430 containerd[1881]: time="2025-10-13T00:29:07.601559296Z" level=info msg="RemoveContainer for \"9327f0afa3cac0b285854593819691aec1cf73b4953882e61102539c17b3608c\""
Oct 13 00:29:07.607035 systemd[1]: Removed slice kubepods-besteffort-podd5df1242_3535_480f_83ed_f48fbf6f9e8f.slice - libcontainer container kubepods-besteffort-podd5df1242_3535_480f_83ed_f48fbf6f9e8f.slice.
Oct 13 00:29:07.614076 containerd[1881]: time="2025-10-13T00:29:07.613969125Z" level=info msg="RemoveContainer for \"9327f0afa3cac0b285854593819691aec1cf73b4953882e61102539c17b3608c\" returns successfully"
Oct 13 00:29:07.614654 kubelet[3373]: I1013 00:29:07.614628 3373 scope.go:117] "RemoveContainer" containerID="9327f0afa3cac0b285854593819691aec1cf73b4953882e61102539c17b3608c"
Oct 13 00:29:07.614984 kubelet[3373]: E1013 00:29:07.614915 3373 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9327f0afa3cac0b285854593819691aec1cf73b4953882e61102539c17b3608c\": not found" containerID="9327f0afa3cac0b285854593819691aec1cf73b4953882e61102539c17b3608c"
Oct 13 00:29:07.615057 containerd[1881]: time="2025-10-13T00:29:07.614820306Z" level=error msg="ContainerStatus for \"9327f0afa3cac0b285854593819691aec1cf73b4953882e61102539c17b3608c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9327f0afa3cac0b285854593819691aec1cf73b4953882e61102539c17b3608c\": not found"
Oct 13 00:29:07.615702 kubelet[3373]: I1013 00:29:07.614935 3373 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9327f0afa3cac0b285854593819691aec1cf73b4953882e61102539c17b3608c"} err="failed to get container status \"9327f0afa3cac0b285854593819691aec1cf73b4953882e61102539c17b3608c\": rpc error: code = NotFound desc = an error occurred when try to find container \"9327f0afa3cac0b285854593819691aec1cf73b4953882e61102539c17b3608c\": not found"
Oct 13 00:29:08.062633 kubelet[3373]: I1013 00:29:08.062591 3373 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5df1242-3535-480f-83ed-f48fbf6f9e8f" path="/var/lib/kubelet/pods/d5df1242-3535-480f-83ed-f48fbf6f9e8f/volumes"
Oct 13 00:29:14.878683 systemd[1]: Started sshd@7-10.200.20.34:22-10.200.16.10:52676.service - OpenSSH per-connection server daemon (10.200.16.10:52676).
Oct 13 00:29:15.351967 sshd[6244]: Accepted publickey for core from 10.200.16.10 port 52676 ssh2: RSA SHA256:aubEDS8yZfNH2XbdzFIlpBCeXwKvWyi9x03sf6YxNU8
Oct 13 00:29:15.354160 sshd-session[6244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 00:29:15.360959 systemd-logind[1859]: New session 10 of user core.
Oct 13 00:29:15.364606 systemd[1]: Started session-10.scope - Session 10 of User core.
Oct 13 00:29:15.801061 sshd[6247]: Connection closed by 10.200.16.10 port 52676
Oct 13 00:29:15.801789 sshd-session[6244]: pam_unix(sshd:session): session closed for user core
Oct 13 00:29:15.807635 systemd[1]: sshd@7-10.200.20.34:22-10.200.16.10:52676.service: Deactivated successfully.
Oct 13 00:29:15.811783 systemd[1]: session-10.scope: Deactivated successfully.
Oct 13 00:29:15.813044 systemd-logind[1859]: Session 10 logged out. Waiting for processes to exit.
Oct 13 00:29:15.814835 systemd-logind[1859]: Removed session 10.
Oct 13 00:29:16.190156 containerd[1881]: time="2025-10-13T00:29:16.189788512Z" level=info msg="TaskExit event in podsandbox handler container_id:\"94dea1e49efd0cf7b8dae40750c3818f20cc3b3ee85c4394161192dcf3877c68\" id:\"3351fe50d3498ae858d9bd124f42b8e3de4be9fb77c7163ecc86f8521fa9dbdc\" pid:6272 exited_at:{seconds:1760315356 nanos:189421804}"
Oct 13 00:29:20.600412 containerd[1881]: time="2025-10-13T00:29:20.600303602Z" level=info msg="TaskExit event in podsandbox handler container_id:\"94dea1e49efd0cf7b8dae40750c3818f20cc3b3ee85c4394161192dcf3877c68\" id:\"59f23a76c1df2fafd70ad1b7dd426787114244bbaee676be003288d651760f2a\" pid:6294 exited_at:{seconds:1760315360 nanos:599980311}"
Oct 13 00:29:20.880142 systemd[1]: Started sshd@8-10.200.20.34:22-10.200.16.10:57376.service - OpenSSH per-connection server daemon (10.200.16.10:57376).
Oct 13 00:29:21.295790 sshd[6308]: Accepted publickey for core from 10.200.16.10 port 57376 ssh2: RSA SHA256:aubEDS8yZfNH2XbdzFIlpBCeXwKvWyi9x03sf6YxNU8
Oct 13 00:29:21.297054 sshd-session[6308]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 00:29:21.303989 systemd-logind[1859]: New session 11 of user core.
Oct 13 00:29:21.307083 systemd[1]: Started session-11.scope - Session 11 of User core.
Oct 13 00:29:21.650486 sshd[6311]: Connection closed by 10.200.16.10 port 57376
Oct 13 00:29:21.650157 sshd-session[6308]: pam_unix(sshd:session): session closed for user core
Oct 13 00:29:21.654446 systemd[1]: sshd@8-10.200.20.34:22-10.200.16.10:57376.service: Deactivated successfully.
Oct 13 00:29:21.657491 systemd[1]: session-11.scope: Deactivated successfully.
Oct 13 00:29:21.658540 systemd-logind[1859]: Session 11 logged out. Waiting for processes to exit.
Oct 13 00:29:21.660634 systemd-logind[1859]: Removed session 11.
Oct 13 00:29:23.277453 containerd[1881]: time="2025-10-13T00:29:23.277394860Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5cce6d6ad6e3b689c9464d26ed717e739885e9507e1b6e83ef1827561f9f51df\" id:\"89403eb68e2ccdb0d67ced1981f5e118b0cd875c6e3f52892df6c393451e990c\" pid:6336 exited_at:{seconds:1760315363 nanos:277134812}"
Oct 13 00:29:23.615035 containerd[1881]: time="2025-10-13T00:29:23.614315660Z" level=info msg="TaskExit event in podsandbox handler container_id:\"45c77385ae7eb8c27b4cfa56257be96084e6585d69ac249acbe58030584003f6\" id:\"0968ba434282e6adccb8ce6c54b63d0142d456ddc1adce65383827997498cc4a\" pid:6360 exited_at:{seconds:1760315363 nanos:613620973}"
Oct 13 00:29:26.724550 systemd[1]: Started sshd@9-10.200.20.34:22-10.200.16.10:57386.service - OpenSSH per-connection server daemon (10.200.16.10:57386).
Oct 13 00:29:27.147535 sshd[6372]: Accepted publickey for core from 10.200.16.10 port 57386 ssh2: RSA SHA256:aubEDS8yZfNH2XbdzFIlpBCeXwKvWyi9x03sf6YxNU8
Oct 13 00:29:27.148712 sshd-session[6372]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 00:29:27.152563 systemd-logind[1859]: New session 12 of user core.
Oct 13 00:29:27.157056 systemd[1]: Started session-12.scope - Session 12 of User core.
Oct 13 00:29:27.504240 sshd[6375]: Connection closed by 10.200.16.10 port 57386
Oct 13 00:29:27.504040 sshd-session[6372]: pam_unix(sshd:session): session closed for user core
Oct 13 00:29:27.508636 systemd[1]: sshd@9-10.200.20.34:22-10.200.16.10:57386.service: Deactivated successfully.
Oct 13 00:29:27.511840 systemd[1]: session-12.scope: Deactivated successfully.
Oct 13 00:29:27.513028 systemd-logind[1859]: Session 12 logged out. Waiting for processes to exit.
Oct 13 00:29:27.515247 systemd-logind[1859]: Removed session 12.
Oct 13 00:29:27.583537 systemd[1]: Started sshd@10-10.200.20.34:22-10.200.16.10:57396.service - OpenSSH per-connection server daemon (10.200.16.10:57396).
Oct 13 00:29:28.022965 sshd[6390]: Accepted publickey for core from 10.200.16.10 port 57396 ssh2: RSA SHA256:aubEDS8yZfNH2XbdzFIlpBCeXwKvWyi9x03sf6YxNU8
Oct 13 00:29:28.025238 sshd-session[6390]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 00:29:28.033850 systemd-logind[1859]: New session 13 of user core.
Oct 13 00:29:28.037984 systemd[1]: Started session-13.scope - Session 13 of User core.
Oct 13 00:29:28.057470 containerd[1881]: time="2025-10-13T00:29:28.057437125Z" level=info msg="StopPodSandbox for \"b78be9ae328e95202bd3d7704e7b8bb4aa6ffd6b26a5caaeaddaa7df0d7704b1\""
Oct 13 00:29:28.151677 containerd[1881]: 2025-10-13 00:29:28.109 [WARNING][6403] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="b78be9ae328e95202bd3d7704e7b8bb4aa6ffd6b26a5caaeaddaa7df0d7704b1" WorkloadEndpoint="ci--4459.1.0--a--27183f81a1-k8s-calico--apiserver--6d4d7db98f--gx5fl-eth0"
Oct 13 00:29:28.151677 containerd[1881]: 2025-10-13 00:29:28.109 [INFO][6403] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b78be9ae328e95202bd3d7704e7b8bb4aa6ffd6b26a5caaeaddaa7df0d7704b1"
Oct 13 00:29:28.151677 containerd[1881]: 2025-10-13 00:29:28.109 [INFO][6403] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b78be9ae328e95202bd3d7704e7b8bb4aa6ffd6b26a5caaeaddaa7df0d7704b1" iface="eth0" netns=""
Oct 13 00:29:28.151677 containerd[1881]: 2025-10-13 00:29:28.109 [INFO][6403] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b78be9ae328e95202bd3d7704e7b8bb4aa6ffd6b26a5caaeaddaa7df0d7704b1"
Oct 13 00:29:28.151677 containerd[1881]: 2025-10-13 00:29:28.109 [INFO][6403] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b78be9ae328e95202bd3d7704e7b8bb4aa6ffd6b26a5caaeaddaa7df0d7704b1"
Oct 13 00:29:28.151677 containerd[1881]: 2025-10-13 00:29:28.136 [INFO][6413] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b78be9ae328e95202bd3d7704e7b8bb4aa6ffd6b26a5caaeaddaa7df0d7704b1" HandleID="k8s-pod-network.b78be9ae328e95202bd3d7704e7b8bb4aa6ffd6b26a5caaeaddaa7df0d7704b1" Workload="ci--4459.1.0--a--27183f81a1-k8s-calico--apiserver--6d4d7db98f--gx5fl-eth0"
Oct 13 00:29:28.151677 containerd[1881]: 2025-10-13 00:29:28.136 [INFO][6413] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Oct 13 00:29:28.151677 containerd[1881]: 2025-10-13 00:29:28.137 [INFO][6413] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Oct 13 00:29:28.151677 containerd[1881]: 2025-10-13 00:29:28.143 [WARNING][6413] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b78be9ae328e95202bd3d7704e7b8bb4aa6ffd6b26a5caaeaddaa7df0d7704b1" HandleID="k8s-pod-network.b78be9ae328e95202bd3d7704e7b8bb4aa6ffd6b26a5caaeaddaa7df0d7704b1" Workload="ci--4459.1.0--a--27183f81a1-k8s-calico--apiserver--6d4d7db98f--gx5fl-eth0"
Oct 13 00:29:28.151677 containerd[1881]: 2025-10-13 00:29:28.144 [INFO][6413] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b78be9ae328e95202bd3d7704e7b8bb4aa6ffd6b26a5caaeaddaa7df0d7704b1" HandleID="k8s-pod-network.b78be9ae328e95202bd3d7704e7b8bb4aa6ffd6b26a5caaeaddaa7df0d7704b1" Workload="ci--4459.1.0--a--27183f81a1-k8s-calico--apiserver--6d4d7db98f--gx5fl-eth0"
Oct 13 00:29:28.151677 containerd[1881]: 2025-10-13 00:29:28.145 [INFO][6413] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Oct 13 00:29:28.151677 containerd[1881]: 2025-10-13 00:29:28.149 [INFO][6403] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b78be9ae328e95202bd3d7704e7b8bb4aa6ffd6b26a5caaeaddaa7df0d7704b1"
Oct 13 00:29:28.152394 containerd[1881]: time="2025-10-13T00:29:28.151682141Z" level=info msg="TearDown network for sandbox \"b78be9ae328e95202bd3d7704e7b8bb4aa6ffd6b26a5caaeaddaa7df0d7704b1\" successfully"
Oct 13 00:29:28.152394 containerd[1881]: time="2025-10-13T00:29:28.151700821Z" level=info msg="StopPodSandbox for \"b78be9ae328e95202bd3d7704e7b8bb4aa6ffd6b26a5caaeaddaa7df0d7704b1\" returns successfully"
Oct 13 00:29:28.154894 containerd[1881]: time="2025-10-13T00:29:28.154863381Z" level=info msg="RemovePodSandbox for \"b78be9ae328e95202bd3d7704e7b8bb4aa6ffd6b26a5caaeaddaa7df0d7704b1\""
Oct 13 00:29:28.156478 containerd[1881]: time="2025-10-13T00:29:28.156069700Z" level=info msg="Forcibly stopping sandbox \"b78be9ae328e95202bd3d7704e7b8bb4aa6ffd6b26a5caaeaddaa7df0d7704b1\""
Oct 13 00:29:28.244587 containerd[1881]: 2025-10-13 00:29:28.210 [WARNING][6427] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="b78be9ae328e95202bd3d7704e7b8bb4aa6ffd6b26a5caaeaddaa7df0d7704b1" WorkloadEndpoint="ci--4459.1.0--a--27183f81a1-k8s-calico--apiserver--6d4d7db98f--gx5fl-eth0"
Oct 13 00:29:28.244587 containerd[1881]: 2025-10-13 00:29:28.210 [INFO][6427] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b78be9ae328e95202bd3d7704e7b8bb4aa6ffd6b26a5caaeaddaa7df0d7704b1"
Oct 13 00:29:28.244587 containerd[1881]: 2025-10-13 00:29:28.210 [INFO][6427] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b78be9ae328e95202bd3d7704e7b8bb4aa6ffd6b26a5caaeaddaa7df0d7704b1" iface="eth0" netns=""
Oct 13 00:29:28.244587 containerd[1881]: 2025-10-13 00:29:28.211 [INFO][6427] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b78be9ae328e95202bd3d7704e7b8bb4aa6ffd6b26a5caaeaddaa7df0d7704b1"
Oct 13 00:29:28.244587 containerd[1881]: 2025-10-13 00:29:28.211 [INFO][6427] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b78be9ae328e95202bd3d7704e7b8bb4aa6ffd6b26a5caaeaddaa7df0d7704b1"
Oct 13 00:29:28.244587 containerd[1881]: 2025-10-13 00:29:28.230 [INFO][6434] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b78be9ae328e95202bd3d7704e7b8bb4aa6ffd6b26a5caaeaddaa7df0d7704b1" HandleID="k8s-pod-network.b78be9ae328e95202bd3d7704e7b8bb4aa6ffd6b26a5caaeaddaa7df0d7704b1" Workload="ci--4459.1.0--a--27183f81a1-k8s-calico--apiserver--6d4d7db98f--gx5fl-eth0"
Oct 13 00:29:28.244587 containerd[1881]: 2025-10-13 00:29:28.230 [INFO][6434] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Oct 13 00:29:28.244587 containerd[1881]: 2025-10-13 00:29:28.230 [INFO][6434] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Oct 13 00:29:28.244587 containerd[1881]: 2025-10-13 00:29:28.237 [WARNING][6434] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b78be9ae328e95202bd3d7704e7b8bb4aa6ffd6b26a5caaeaddaa7df0d7704b1" HandleID="k8s-pod-network.b78be9ae328e95202bd3d7704e7b8bb4aa6ffd6b26a5caaeaddaa7df0d7704b1" Workload="ci--4459.1.0--a--27183f81a1-k8s-calico--apiserver--6d4d7db98f--gx5fl-eth0"
Oct 13 00:29:28.244587 containerd[1881]: 2025-10-13 00:29:28.237 [INFO][6434] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b78be9ae328e95202bd3d7704e7b8bb4aa6ffd6b26a5caaeaddaa7df0d7704b1" HandleID="k8s-pod-network.b78be9ae328e95202bd3d7704e7b8bb4aa6ffd6b26a5caaeaddaa7df0d7704b1" Workload="ci--4459.1.0--a--27183f81a1-k8s-calico--apiserver--6d4d7db98f--gx5fl-eth0"
Oct 13 00:29:28.244587 containerd[1881]: 2025-10-13 00:29:28.241 [INFO][6434] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Oct 13 00:29:28.244587 containerd[1881]: 2025-10-13 00:29:28.243 [INFO][6427] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b78be9ae328e95202bd3d7704e7b8bb4aa6ffd6b26a5caaeaddaa7df0d7704b1"
Oct 13 00:29:28.245158 containerd[1881]: time="2025-10-13T00:29:28.244931836Z" level=info msg="TearDown network for sandbox \"b78be9ae328e95202bd3d7704e7b8bb4aa6ffd6b26a5caaeaddaa7df0d7704b1\" successfully"
Oct 13 00:29:28.246821 containerd[1881]: time="2025-10-13T00:29:28.246771617Z" level=info msg="Ensure that sandbox b78be9ae328e95202bd3d7704e7b8bb4aa6ffd6b26a5caaeaddaa7df0d7704b1 in task-service has been cleanup successfully"
Oct 13 00:29:28.258514 containerd[1881]: time="2025-10-13T00:29:28.258476632Z" level=info msg="RemovePodSandbox \"b78be9ae328e95202bd3d7704e7b8bb4aa6ffd6b26a5caaeaddaa7df0d7704b1\" returns successfully"
Oct 13 00:29:28.259744 containerd[1881]: time="2025-10-13T00:29:28.259139766Z" level=info msg="StopPodSandbox for \"5fdc1bee8a7fb128bebdd655e60449fb3c4ee203e1c8bda5fd20325ec41b53c8\""
Oct 13 00:29:28.362691 containerd[1881]: 2025-10-13 00:29:28.312 [WARNING][6453] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="5fdc1bee8a7fb128bebdd655e60449fb3c4ee203e1c8bda5fd20325ec41b53c8" WorkloadEndpoint="ci--4459.1.0--a--27183f81a1-k8s-calico--apiserver--6d4d7db98f--ctlg8-eth0"
Oct 13 00:29:28.362691 containerd[1881]: 2025-10-13 00:29:28.313 [INFO][6453] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5fdc1bee8a7fb128bebdd655e60449fb3c4ee203e1c8bda5fd20325ec41b53c8"
Oct 13 00:29:28.362691 containerd[1881]: 2025-10-13 00:29:28.313 [INFO][6453] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5fdc1bee8a7fb128bebdd655e60449fb3c4ee203e1c8bda5fd20325ec41b53c8" iface="eth0" netns=""
Oct 13 00:29:28.362691 containerd[1881]: 2025-10-13 00:29:28.313 [INFO][6453] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5fdc1bee8a7fb128bebdd655e60449fb3c4ee203e1c8bda5fd20325ec41b53c8"
Oct 13 00:29:28.362691 containerd[1881]: 2025-10-13 00:29:28.313 [INFO][6453] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5fdc1bee8a7fb128bebdd655e60449fb3c4ee203e1c8bda5fd20325ec41b53c8"
Oct 13 00:29:28.362691 containerd[1881]: 2025-10-13 00:29:28.344 [INFO][6460] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5fdc1bee8a7fb128bebdd655e60449fb3c4ee203e1c8bda5fd20325ec41b53c8" HandleID="k8s-pod-network.5fdc1bee8a7fb128bebdd655e60449fb3c4ee203e1c8bda5fd20325ec41b53c8" Workload="ci--4459.1.0--a--27183f81a1-k8s-calico--apiserver--6d4d7db98f--ctlg8-eth0"
Oct 13 00:29:28.362691 containerd[1881]: 2025-10-13 00:29:28.344 [INFO][6460] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Oct 13 00:29:28.362691 containerd[1881]: 2025-10-13 00:29:28.344 [INFO][6460] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Oct 13 00:29:28.362691 containerd[1881]: 2025-10-13 00:29:28.356 [WARNING][6460] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5fdc1bee8a7fb128bebdd655e60449fb3c4ee203e1c8bda5fd20325ec41b53c8" HandleID="k8s-pod-network.5fdc1bee8a7fb128bebdd655e60449fb3c4ee203e1c8bda5fd20325ec41b53c8" Workload="ci--4459.1.0--a--27183f81a1-k8s-calico--apiserver--6d4d7db98f--ctlg8-eth0"
Oct 13 00:29:28.362691 containerd[1881]: 2025-10-13 00:29:28.356 [INFO][6460] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5fdc1bee8a7fb128bebdd655e60449fb3c4ee203e1c8bda5fd20325ec41b53c8" HandleID="k8s-pod-network.5fdc1bee8a7fb128bebdd655e60449fb3c4ee203e1c8bda5fd20325ec41b53c8" Workload="ci--4459.1.0--a--27183f81a1-k8s-calico--apiserver--6d4d7db98f--ctlg8-eth0"
Oct 13 00:29:28.362691 containerd[1881]: 2025-10-13 00:29:28.358 [INFO][6460] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Oct 13 00:29:28.362691 containerd[1881]: 2025-10-13 00:29:28.360 [INFO][6453] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5fdc1bee8a7fb128bebdd655e60449fb3c4ee203e1c8bda5fd20325ec41b53c8"
Oct 13 00:29:28.363266 containerd[1881]: time="2025-10-13T00:29:28.362991297Z" level=info msg="TearDown network for sandbox \"5fdc1bee8a7fb128bebdd655e60449fb3c4ee203e1c8bda5fd20325ec41b53c8\" successfully"
Oct 13 00:29:28.363266 containerd[1881]: time="2025-10-13T00:29:28.363020618Z" level=info msg="StopPodSandbox for \"5fdc1bee8a7fb128bebdd655e60449fb3c4ee203e1c8bda5fd20325ec41b53c8\" returns successfully"
Oct 13 00:29:28.364490 containerd[1881]: time="2025-10-13T00:29:28.364468849Z" level=info msg="RemovePodSandbox for \"5fdc1bee8a7fb128bebdd655e60449fb3c4ee203e1c8bda5fd20325ec41b53c8\""
Oct 13 00:29:28.364976 containerd[1881]: time="2025-10-13T00:29:28.364676120Z" level=info msg="Forcibly stopping sandbox \"5fdc1bee8a7fb128bebdd655e60449fb3c4ee203e1c8bda5fd20325ec41b53c8\""
Oct 13 00:29:28.463094 containerd[1881]: 2025-10-13 00:29:28.423 [WARNING][6474] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="5fdc1bee8a7fb128bebdd655e60449fb3c4ee203e1c8bda5fd20325ec41b53c8" WorkloadEndpoint="ci--4459.1.0--a--27183f81a1-k8s-calico--apiserver--6d4d7db98f--ctlg8-eth0"
Oct 13 00:29:28.463094 containerd[1881]: 2025-10-13 00:29:28.423 [INFO][6474] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5fdc1bee8a7fb128bebdd655e60449fb3c4ee203e1c8bda5fd20325ec41b53c8"
Oct 13 00:29:28.463094 containerd[1881]: 2025-10-13 00:29:28.423 [INFO][6474] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5fdc1bee8a7fb128bebdd655e60449fb3c4ee203e1c8bda5fd20325ec41b53c8" iface="eth0" netns=""
Oct 13 00:29:28.463094 containerd[1881]: 2025-10-13 00:29:28.423 [INFO][6474] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5fdc1bee8a7fb128bebdd655e60449fb3c4ee203e1c8bda5fd20325ec41b53c8"
Oct 13 00:29:28.463094 containerd[1881]: 2025-10-13 00:29:28.423 [INFO][6474] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5fdc1bee8a7fb128bebdd655e60449fb3c4ee203e1c8bda5fd20325ec41b53c8"
Oct 13 00:29:28.463094 containerd[1881]: 2025-10-13 00:29:28.453 [INFO][6482] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5fdc1bee8a7fb128bebdd655e60449fb3c4ee203e1c8bda5fd20325ec41b53c8" HandleID="k8s-pod-network.5fdc1bee8a7fb128bebdd655e60449fb3c4ee203e1c8bda5fd20325ec41b53c8" Workload="ci--4459.1.0--a--27183f81a1-k8s-calico--apiserver--6d4d7db98f--ctlg8-eth0"
Oct 13 00:29:28.463094 containerd[1881]: 2025-10-13 00:29:28.453 [INFO][6482] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Oct 13 00:29:28.463094 containerd[1881]: 2025-10-13 00:29:28.453 [INFO][6482] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Oct 13 00:29:28.463094 containerd[1881]: 2025-10-13 00:29:28.458 [WARNING][6482] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist.
Ignoring ContainerID="5fdc1bee8a7fb128bebdd655e60449fb3c4ee203e1c8bda5fd20325ec41b53c8" HandleID="k8s-pod-network.5fdc1bee8a7fb128bebdd655e60449fb3c4ee203e1c8bda5fd20325ec41b53c8" Workload="ci--4459.1.0--a--27183f81a1-k8s-calico--apiserver--6d4d7db98f--ctlg8-eth0" Oct 13 00:29:28.463094 containerd[1881]: 2025-10-13 00:29:28.458 [INFO][6482] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5fdc1bee8a7fb128bebdd655e60449fb3c4ee203e1c8bda5fd20325ec41b53c8" HandleID="k8s-pod-network.5fdc1bee8a7fb128bebdd655e60449fb3c4ee203e1c8bda5fd20325ec41b53c8" Workload="ci--4459.1.0--a--27183f81a1-k8s-calico--apiserver--6d4d7db98f--ctlg8-eth0" Oct 13 00:29:28.463094 containerd[1881]: 2025-10-13 00:29:28.458 [INFO][6482] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 00:29:28.463094 containerd[1881]: 2025-10-13 00:29:28.460 [INFO][6474] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5fdc1bee8a7fb128bebdd655e60449fb3c4ee203e1c8bda5fd20325ec41b53c8" Oct 13 00:29:28.463626 containerd[1881]: time="2025-10-13T00:29:28.463433420Z" level=info msg="TearDown network for sandbox \"5fdc1bee8a7fb128bebdd655e60449fb3c4ee203e1c8bda5fd20325ec41b53c8\" successfully" Oct 13 00:29:28.466047 containerd[1881]: time="2025-10-13T00:29:28.466026265Z" level=info msg="Ensure that sandbox 5fdc1bee8a7fb128bebdd655e60449fb3c4ee203e1c8bda5fd20325ec41b53c8 in task-service has been cleanup successfully" Oct 13 00:29:28.466900 sshd[6393]: Connection closed by 10.200.16.10 port 57396 Oct 13 00:29:28.467527 sshd-session[6390]: pam_unix(sshd:session): session closed for user core Oct 13 00:29:28.471615 systemd[1]: sshd@10-10.200.20.34:22-10.200.16.10:57396.service: Deactivated successfully. Oct 13 00:29:28.474854 systemd[1]: session-13.scope: Deactivated successfully. Oct 13 00:29:28.475896 systemd-logind[1859]: Session 13 logged out. Waiting for processes to exit. Oct 13 00:29:28.479195 systemd-logind[1859]: Removed session 13. 
Oct 13 00:29:28.481153 containerd[1881]: time="2025-10-13T00:29:28.481122471Z" level=info msg="RemovePodSandbox \"5fdc1bee8a7fb128bebdd655e60449fb3c4ee203e1c8bda5fd20325ec41b53c8\" returns successfully" Oct 13 00:29:28.547112 systemd[1]: Started sshd@11-10.200.20.34:22-10.200.16.10:57400.service - OpenSSH per-connection server daemon (10.200.16.10:57400). Oct 13 00:29:28.984241 sshd[6492]: Accepted publickey for core from 10.200.16.10 port 57400 ssh2: RSA SHA256:aubEDS8yZfNH2XbdzFIlpBCeXwKvWyi9x03sf6YxNU8 Oct 13 00:29:28.985343 sshd-session[6492]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:29:28.989213 systemd-logind[1859]: New session 14 of user core. Oct 13 00:29:28.994075 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 13 00:29:29.358965 sshd[6495]: Connection closed by 10.200.16.10 port 57400 Oct 13 00:29:29.359194 sshd-session[6492]: pam_unix(sshd:session): session closed for user core Oct 13 00:29:29.362195 systemd-logind[1859]: Session 14 logged out. Waiting for processes to exit. Oct 13 00:29:29.362599 systemd[1]: sshd@11-10.200.20.34:22-10.200.16.10:57400.service: Deactivated successfully. Oct 13 00:29:29.364717 systemd[1]: session-14.scope: Deactivated successfully. Oct 13 00:29:29.366321 systemd-logind[1859]: Removed session 14. Oct 13 00:29:34.432922 systemd[1]: Started sshd@12-10.200.20.34:22-10.200.16.10:47558.service - OpenSSH per-connection server daemon (10.200.16.10:47558). Oct 13 00:29:34.865931 sshd[6511]: Accepted publickey for core from 10.200.16.10 port 47558 ssh2: RSA SHA256:aubEDS8yZfNH2XbdzFIlpBCeXwKvWyi9x03sf6YxNU8 Oct 13 00:29:34.867029 sshd-session[6511]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:29:34.871972 systemd-logind[1859]: New session 15 of user core. Oct 13 00:29:34.878075 systemd[1]: Started session-15.scope - Session 15 of User core. 
Oct 13 00:29:35.212181 sshd[6514]: Connection closed by 10.200.16.10 port 47558 Oct 13 00:29:35.212505 sshd-session[6511]: pam_unix(sshd:session): session closed for user core Oct 13 00:29:35.215810 systemd-logind[1859]: Session 15 logged out. Waiting for processes to exit. Oct 13 00:29:35.215969 systemd[1]: sshd@12-10.200.20.34:22-10.200.16.10:47558.service: Deactivated successfully. Oct 13 00:29:35.217439 systemd[1]: session-15.scope: Deactivated successfully. Oct 13 00:29:35.220290 systemd-logind[1859]: Removed session 15. Oct 13 00:29:40.295781 systemd[1]: Started sshd@13-10.200.20.34:22-10.200.16.10:37164.service - OpenSSH per-connection server daemon (10.200.16.10:37164). Oct 13 00:29:40.717026 sshd[6527]: Accepted publickey for core from 10.200.16.10 port 37164 ssh2: RSA SHA256:aubEDS8yZfNH2XbdzFIlpBCeXwKvWyi9x03sf6YxNU8 Oct 13 00:29:40.718097 sshd-session[6527]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:29:40.721637 systemd-logind[1859]: New session 16 of user core. Oct 13 00:29:40.731231 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 13 00:29:41.066700 sshd[6530]: Connection closed by 10.200.16.10 port 37164 Oct 13 00:29:41.066609 sshd-session[6527]: pam_unix(sshd:session): session closed for user core Oct 13 00:29:41.070764 systemd[1]: sshd@13-10.200.20.34:22-10.200.16.10:37164.service: Deactivated successfully. Oct 13 00:29:41.072698 systemd[1]: session-16.scope: Deactivated successfully. Oct 13 00:29:41.073586 systemd-logind[1859]: Session 16 logged out. Waiting for processes to exit. Oct 13 00:29:41.075478 systemd-logind[1859]: Removed session 16. Oct 13 00:29:46.148156 systemd[1]: Started sshd@14-10.200.20.34:22-10.200.16.10:37178.service - OpenSSH per-connection server daemon (10.200.16.10:37178). 
Oct 13 00:29:46.569750 sshd[6548]: Accepted publickey for core from 10.200.16.10 port 37178 ssh2: RSA SHA256:aubEDS8yZfNH2XbdzFIlpBCeXwKvWyi9x03sf6YxNU8 Oct 13 00:29:46.572096 sshd-session[6548]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:29:46.577551 systemd-logind[1859]: New session 17 of user core. Oct 13 00:29:46.582067 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 13 00:29:46.920818 sshd[6551]: Connection closed by 10.200.16.10 port 37178 Oct 13 00:29:46.921391 sshd-session[6548]: pam_unix(sshd:session): session closed for user core Oct 13 00:29:46.924786 systemd[1]: sshd@14-10.200.20.34:22-10.200.16.10:37178.service: Deactivated successfully. Oct 13 00:29:46.927356 systemd[1]: session-17.scope: Deactivated successfully. Oct 13 00:29:46.929130 systemd-logind[1859]: Session 17 logged out. Waiting for processes to exit. Oct 13 00:29:46.930141 systemd-logind[1859]: Removed session 17. Oct 13 00:29:46.998775 systemd[1]: Started sshd@15-10.200.20.34:22-10.200.16.10:37192.service - OpenSSH per-connection server daemon (10.200.16.10:37192). Oct 13 00:29:47.421162 sshd[6563]: Accepted publickey for core from 10.200.16.10 port 37192 ssh2: RSA SHA256:aubEDS8yZfNH2XbdzFIlpBCeXwKvWyi9x03sf6YxNU8 Oct 13 00:29:47.422559 sshd-session[6563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:29:47.427677 systemd-logind[1859]: New session 18 of user core. Oct 13 00:29:47.435083 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 13 00:29:47.907416 sshd[6566]: Connection closed by 10.200.16.10 port 37192 Oct 13 00:29:47.907874 sshd-session[6563]: pam_unix(sshd:session): session closed for user core Oct 13 00:29:47.911465 systemd[1]: sshd@15-10.200.20.34:22-10.200.16.10:37192.service: Deactivated successfully. Oct 13 00:29:47.914087 systemd[1]: session-18.scope: Deactivated successfully. Oct 13 00:29:47.915558 systemd-logind[1859]: Session 18 logged out. 
Waiting for processes to exit. Oct 13 00:29:47.916856 systemd-logind[1859]: Removed session 18. Oct 13 00:29:47.988161 systemd[1]: Started sshd@16-10.200.20.34:22-10.200.16.10:37202.service - OpenSSH per-connection server daemon (10.200.16.10:37202). Oct 13 00:29:48.421639 sshd[6577]: Accepted publickey for core from 10.200.16.10 port 37202 ssh2: RSA SHA256:aubEDS8yZfNH2XbdzFIlpBCeXwKvWyi9x03sf6YxNU8 Oct 13 00:29:48.422693 sshd-session[6577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:29:48.426790 systemd-logind[1859]: New session 19 of user core. Oct 13 00:29:48.434048 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 13 00:29:49.196501 sshd[6580]: Connection closed by 10.200.16.10 port 37202 Oct 13 00:29:49.196421 sshd-session[6577]: pam_unix(sshd:session): session closed for user core Oct 13 00:29:49.200810 systemd-logind[1859]: Session 19 logged out. Waiting for processes to exit. Oct 13 00:29:49.200879 systemd[1]: sshd@16-10.200.20.34:22-10.200.16.10:37202.service: Deactivated successfully. Oct 13 00:29:49.203296 systemd[1]: session-19.scope: Deactivated successfully. Oct 13 00:29:49.205227 systemd-logind[1859]: Removed session 19. Oct 13 00:29:49.277132 systemd[1]: Started sshd@17-10.200.20.34:22-10.200.16.10:37208.service - OpenSSH per-connection server daemon (10.200.16.10:37208). Oct 13 00:29:49.705980 sshd[6597]: Accepted publickey for core from 10.200.16.10 port 37208 ssh2: RSA SHA256:aubEDS8yZfNH2XbdzFIlpBCeXwKvWyi9x03sf6YxNU8 Oct 13 00:29:49.707051 sshd-session[6597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:29:49.710608 systemd-logind[1859]: New session 20 of user core. Oct 13 00:29:49.721053 systemd[1]: Started session-20.scope - Session 20 of User core. 
Oct 13 00:29:50.145081 sshd[6600]: Connection closed by 10.200.16.10 port 37208 Oct 13 00:29:50.145619 sshd-session[6597]: pam_unix(sshd:session): session closed for user core Oct 13 00:29:50.148640 systemd[1]: sshd@17-10.200.20.34:22-10.200.16.10:37208.service: Deactivated successfully. Oct 13 00:29:50.150103 systemd[1]: session-20.scope: Deactivated successfully. Oct 13 00:29:50.150747 systemd-logind[1859]: Session 20 logged out. Waiting for processes to exit. Oct 13 00:29:50.151970 systemd-logind[1859]: Removed session 20. Oct 13 00:29:50.225549 systemd[1]: Started sshd@18-10.200.20.34:22-10.200.16.10:39286.service - OpenSSH per-connection server daemon (10.200.16.10:39286). Oct 13 00:29:50.535706 containerd[1881]: time="2025-10-13T00:29:50.535670758Z" level=info msg="TaskExit event in podsandbox handler container_id:\"94dea1e49efd0cf7b8dae40750c3818f20cc3b3ee85c4394161192dcf3877c68\" id:\"eb1c994fdf24da1e8870366f9ab4d41ee11254bdfb078c8b28c42501fd190915\" pid:6626 exited_at:{seconds:1760315390 nanos:535318042}" Oct 13 00:29:50.656754 sshd[6610]: Accepted publickey for core from 10.200.16.10 port 39286 ssh2: RSA SHA256:aubEDS8yZfNH2XbdzFIlpBCeXwKvWyi9x03sf6YxNU8 Oct 13 00:29:50.658114 sshd-session[6610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:29:50.661986 systemd-logind[1859]: New session 21 of user core. Oct 13 00:29:50.668069 systemd[1]: Started session-21.scope - Session 21 of User core. Oct 13 00:29:51.045766 sshd[6634]: Connection closed by 10.200.16.10 port 39286 Oct 13 00:29:51.046138 sshd-session[6610]: pam_unix(sshd:session): session closed for user core Oct 13 00:29:51.050177 systemd-logind[1859]: Session 21 logged out. Waiting for processes to exit. Oct 13 00:29:51.051656 systemd[1]: sshd@18-10.200.20.34:22-10.200.16.10:39286.service: Deactivated successfully. Oct 13 00:29:51.055049 systemd[1]: session-21.scope: Deactivated successfully. Oct 13 00:29:51.057192 systemd-logind[1859]: Removed session 21. 
Oct 13 00:29:52.022982 containerd[1881]: time="2025-10-13T00:29:52.022918507Z" level=info msg="TaskExit event in podsandbox handler container_id:\"45c77385ae7eb8c27b4cfa56257be96084e6585d69ac249acbe58030584003f6\" id:\"2ea8aac195b7ddc0d9cd3223da1b38b7cf2d06f447f5ea6b4efb5b405d37128a\" pid:6657 exited_at:{seconds:1760315392 nanos:22750782}" Oct 13 00:29:53.267930 containerd[1881]: time="2025-10-13T00:29:53.267888652Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5cce6d6ad6e3b689c9464d26ed717e739885e9507e1b6e83ef1827561f9f51df\" id:\"a473cc65d578b81257712b46e1e03c7d718d53cf02a407779f65c5ab64b332bf\" pid:6678 exited_at:{seconds:1760315393 nanos:267628075}" Oct 13 00:29:53.523536 containerd[1881]: time="2025-10-13T00:29:53.523500098Z" level=info msg="TaskExit event in podsandbox handler container_id:\"45c77385ae7eb8c27b4cfa56257be96084e6585d69ac249acbe58030584003f6\" id:\"3ab4358896edc4f207ceefd45c9f0aeb3a6e3540ab48bafdf00d930549d925b1\" pid:6701 exited_at:{seconds:1760315393 nanos:523106221}" Oct 13 00:29:56.125134 systemd[1]: Started sshd@19-10.200.20.34:22-10.200.16.10:39292.service - OpenSSH per-connection server daemon (10.200.16.10:39292). Oct 13 00:29:56.558367 sshd[6720]: Accepted publickey for core from 10.200.16.10 port 39292 ssh2: RSA SHA256:aubEDS8yZfNH2XbdzFIlpBCeXwKvWyi9x03sf6YxNU8 Oct 13 00:29:56.559871 sshd-session[6720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:29:56.563739 systemd-logind[1859]: New session 22 of user core. Oct 13 00:29:56.571096 systemd[1]: Started session-22.scope - Session 22 of User core. Oct 13 00:29:56.921803 sshd[6723]: Connection closed by 10.200.16.10 port 39292 Oct 13 00:29:56.920792 sshd-session[6720]: pam_unix(sshd:session): session closed for user core Oct 13 00:29:56.923601 systemd-logind[1859]: Session 22 logged out. Waiting for processes to exit. 
Oct 13 00:29:56.924420 systemd[1]: sshd@19-10.200.20.34:22-10.200.16.10:39292.service: Deactivated successfully. Oct 13 00:29:56.925914 systemd[1]: session-22.scope: Deactivated successfully. Oct 13 00:29:56.927237 systemd-logind[1859]: Removed session 22. Oct 13 00:30:02.019165 systemd[1]: Started sshd@20-10.200.20.34:22-10.200.16.10:45224.service - OpenSSH per-connection server daemon (10.200.16.10:45224). Oct 13 00:30:02.475250 sshd[6752]: Accepted publickey for core from 10.200.16.10 port 45224 ssh2: RSA SHA256:aubEDS8yZfNH2XbdzFIlpBCeXwKvWyi9x03sf6YxNU8 Oct 13 00:30:02.476420 sshd-session[6752]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:30:02.482273 systemd-logind[1859]: New session 23 of user core. Oct 13 00:30:02.489096 systemd[1]: Started session-23.scope - Session 23 of User core. Oct 13 00:30:02.863546 sshd[6755]: Connection closed by 10.200.16.10 port 45224 Oct 13 00:30:02.864178 sshd-session[6752]: pam_unix(sshd:session): session closed for user core Oct 13 00:30:02.867982 systemd-logind[1859]: Session 23 logged out. Waiting for processes to exit. Oct 13 00:30:02.868363 systemd[1]: sshd@20-10.200.20.34:22-10.200.16.10:45224.service: Deactivated successfully. Oct 13 00:30:02.871438 systemd[1]: session-23.scope: Deactivated successfully. Oct 13 00:30:02.873173 systemd-logind[1859]: Removed session 23. Oct 13 00:30:07.948630 systemd[1]: Started sshd@21-10.200.20.34:22-10.200.16.10:45228.service - OpenSSH per-connection server daemon (10.200.16.10:45228). Oct 13 00:30:08.402759 sshd[6768]: Accepted publickey for core from 10.200.16.10 port 45228 ssh2: RSA SHA256:aubEDS8yZfNH2XbdzFIlpBCeXwKvWyi9x03sf6YxNU8 Oct 13 00:30:08.403924 sshd-session[6768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:30:08.407851 systemd-logind[1859]: New session 24 of user core. Oct 13 00:30:08.416204 systemd[1]: Started session-24.scope - Session 24 of User core. 
Oct 13 00:30:08.791854 sshd[6771]: Connection closed by 10.200.16.10 port 45228 Oct 13 00:30:08.792234 sshd-session[6768]: pam_unix(sshd:session): session closed for user core Oct 13 00:30:08.798686 systemd[1]: sshd@21-10.200.20.34:22-10.200.16.10:45228.service: Deactivated successfully. Oct 13 00:30:08.800736 systemd[1]: session-24.scope: Deactivated successfully. Oct 13 00:30:08.803698 systemd-logind[1859]: Session 24 logged out. Waiting for processes to exit. Oct 13 00:30:08.806334 systemd-logind[1859]: Removed session 24. Oct 13 00:30:13.861387 systemd[1]: Started sshd@22-10.200.20.34:22-10.200.16.10:42456.service - OpenSSH per-connection server daemon (10.200.16.10:42456). Oct 13 00:30:14.274194 sshd[6783]: Accepted publickey for core from 10.200.16.10 port 42456 ssh2: RSA SHA256:aubEDS8yZfNH2XbdzFIlpBCeXwKvWyi9x03sf6YxNU8 Oct 13 00:30:14.275163 sshd-session[6783]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:30:14.278709 systemd-logind[1859]: New session 25 of user core. Oct 13 00:30:14.287232 systemd[1]: Started session-25.scope - Session 25 of User core. Oct 13 00:30:14.629708 sshd[6786]: Connection closed by 10.200.16.10 port 42456 Oct 13 00:30:14.630388 sshd-session[6783]: pam_unix(sshd:session): session closed for user core Oct 13 00:30:14.633576 systemd[1]: sshd@22-10.200.20.34:22-10.200.16.10:42456.service: Deactivated successfully. Oct 13 00:30:14.635772 systemd[1]: session-25.scope: Deactivated successfully. Oct 13 00:30:14.636790 systemd-logind[1859]: Session 25 logged out. Waiting for processes to exit. Oct 13 00:30:14.638965 systemd-logind[1859]: Removed session 25. 
Oct 13 00:30:16.190001 containerd[1881]: time="2025-10-13T00:30:16.189957114Z" level=info msg="TaskExit event in podsandbox handler container_id:\"94dea1e49efd0cf7b8dae40750c3818f20cc3b3ee85c4394161192dcf3877c68\" id:\"10abe91de9a8dc312d84ebb016c97d397fff1c10e092894c10c82eb754161284\" pid:6810 exited_at:{seconds:1760315416 nanos:189643920}" Oct 13 00:30:19.707238 systemd[1]: Started sshd@23-10.200.20.34:22-10.200.16.10:42466.service - OpenSSH per-connection server daemon (10.200.16.10:42466). Oct 13 00:30:20.122160 sshd[6821]: Accepted publickey for core from 10.200.16.10 port 42466 ssh2: RSA SHA256:aubEDS8yZfNH2XbdzFIlpBCeXwKvWyi9x03sf6YxNU8 Oct 13 00:30:20.123287 sshd-session[6821]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:30:20.127173 systemd-logind[1859]: New session 26 of user core. Oct 13 00:30:20.136081 systemd[1]: Started session-26.scope - Session 26 of User core. Oct 13 00:30:20.488120 sshd[6824]: Connection closed by 10.200.16.10 port 42466 Oct 13 00:30:20.489404 sshd-session[6821]: pam_unix(sshd:session): session closed for user core Oct 13 00:30:20.495013 systemd[1]: sshd@23-10.200.20.34:22-10.200.16.10:42466.service: Deactivated successfully. Oct 13 00:30:20.498660 systemd[1]: session-26.scope: Deactivated successfully. Oct 13 00:30:20.501038 systemd-logind[1859]: Session 26 logged out. Waiting for processes to exit. Oct 13 00:30:20.502296 systemd-logind[1859]: Removed session 26. 
Oct 13 00:30:20.542988 containerd[1881]: time="2025-10-13T00:30:20.542932807Z" level=info msg="TaskExit event in podsandbox handler container_id:\"94dea1e49efd0cf7b8dae40750c3818f20cc3b3ee85c4394161192dcf3877c68\" id:\"57d0843ca4cb343f268552724b883de3161bb909e3a456014d801f3897702e3b\" pid:6846 exited_at:{seconds:1760315420 nanos:542480776}" Oct 13 00:30:23.331759 containerd[1881]: time="2025-10-13T00:30:23.331658270Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5cce6d6ad6e3b689c9464d26ed717e739885e9507e1b6e83ef1827561f9f51df\" id:\"751fef92a96b939129d04a878f4015539903e09a88f9c9a1a908d17bb34c9468\" pid:6868 exited_at:{seconds:1760315423 nanos:331323547}" Oct 13 00:30:23.526362 containerd[1881]: time="2025-10-13T00:30:23.526319361Z" level=info msg="TaskExit event in podsandbox handler container_id:\"45c77385ae7eb8c27b4cfa56257be96084e6585d69ac249acbe58030584003f6\" id:\"686ab1fe938dffefd9758c9b94355cc0a1440c3773885c6502971570db340d98\" pid:6896 exited_at:{seconds:1760315423 nanos:525823513}" Oct 13 00:30:25.568233 systemd[1]: Started sshd@24-10.200.20.34:22-10.200.16.10:41732.service - OpenSSH per-connection server daemon (10.200.16.10:41732). Oct 13 00:30:25.995570 sshd[6906]: Accepted publickey for core from 10.200.16.10 port 41732 ssh2: RSA SHA256:aubEDS8yZfNH2XbdzFIlpBCeXwKvWyi9x03sf6YxNU8 Oct 13 00:30:25.996550 sshd-session[6906]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 00:30:26.000592 systemd-logind[1859]: New session 27 of user core. Oct 13 00:30:26.007066 systemd[1]: Started session-27.scope - Session 27 of User core. Oct 13 00:30:26.362377 sshd[6910]: Connection closed by 10.200.16.10 port 41732 Oct 13 00:30:26.362882 sshd-session[6906]: pam_unix(sshd:session): session closed for user core Oct 13 00:30:26.366401 systemd[1]: sshd@24-10.200.20.34:22-10.200.16.10:41732.service: Deactivated successfully. Oct 13 00:30:26.366688 systemd-logind[1859]: Session 27 logged out. 
Waiting for processes to exit. Oct 13 00:30:26.369385 systemd[1]: session-27.scope: Deactivated successfully. Oct 13 00:30:26.373184 systemd-logind[1859]: Removed session 27.