Sep 9 04:54:32.031071 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd490]
Sep 9 04:54:32.031090 kernel: Linux version 6.12.45-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Tue Sep 9 03:38:34 -00 2025
Sep 9 04:54:32.031096 kernel: KASLR enabled
Sep 9 04:54:32.031100 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Sep 9 04:54:32.031105 kernel: printk: legacy bootconsole [pl11] enabled
Sep 9 04:54:32.031108 kernel: efi: EFI v2.7 by EDK II
Sep 9 04:54:32.031113 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f20f698 RNG=0x3fd5f998 MEMRESERVE=0x3e477598
Sep 9 04:54:32.031117 kernel: random: crng init done
Sep 9 04:54:32.031121 kernel: secureboot: Secure boot disabled
Sep 9 04:54:32.031125 kernel: ACPI: Early table checksum verification disabled
Sep 9 04:54:32.031129 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Sep 9 04:54:32.031133 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 9 04:54:32.031137 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 9 04:54:32.031142 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Sep 9 04:54:32.031147 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 9 04:54:32.031151 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 9 04:54:32.031155 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 9 04:54:32.031160 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 9 04:54:32.031165 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 9 04:54:32.031169 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 9 04:54:32.031173 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Sep 9 04:54:32.031177 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 9 04:54:32.031182 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Sep 9 04:54:32.031186 kernel: ACPI: Use ACPI SPCR as default console: No
Sep 9 04:54:32.031190 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Sep 9 04:54:32.031194 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] hotplug
Sep 9 04:54:32.031198 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] hotplug
Sep 9 04:54:32.031202 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Sep 9 04:54:32.031207 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Sep 9 04:54:32.031212 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Sep 9 04:54:32.031216 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Sep 9 04:54:32.031220 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Sep 9 04:54:32.031224 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Sep 9 04:54:32.031228 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Sep 9 04:54:32.031232 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Sep 9 04:54:32.031236 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Sep 9 04:54:32.031241 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x1bfffffff] -> [mem 0x00000000-0x1bfffffff]
Sep 9 04:54:32.031245 kernel: NODE_DATA(0) allocated [mem 0x1bf7fda00-0x1bf804fff]
Sep 9 04:54:32.031249 kernel: Zone ranges:
Sep 9 04:54:32.031253 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Sep 9 04:54:32.031260 kernel: DMA32 empty
Sep 9 04:54:32.031264 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Sep 9 04:54:32.031269 kernel: Device empty
Sep 9 04:54:32.031273 kernel: Movable zone start for each node
Sep 9 04:54:32.031277 kernel: Early memory node ranges
Sep 9 04:54:32.031282 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Sep 9 04:54:32.031287 kernel: node 0: [mem 0x0000000000824000-0x000000003e45ffff]
Sep 9 04:54:32.031291 kernel: node 0: [mem 0x000000003e460000-0x000000003e46ffff]
Sep 9 04:54:32.031295 kernel: node 0: [mem 0x000000003e470000-0x000000003e54ffff]
Sep 9 04:54:32.031300 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Sep 9 04:54:32.031304 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Sep 9 04:54:32.031308 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Sep 9 04:54:32.031312 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Sep 9 04:54:32.031317 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Sep 9 04:54:32.031321 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Sep 9 04:54:32.031325 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Sep 9 04:54:32.031330 kernel: cma: Reserved 16 MiB at 0x000000003d400000 on node -1
Sep 9 04:54:32.031335 kernel: psci: probing for conduit method from ACPI.
Sep 9 04:54:32.031339 kernel: psci: PSCIv1.1 detected in firmware.
Sep 9 04:54:32.031344 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 9 04:54:32.031348 kernel: psci: MIGRATE_INFO_TYPE not supported.
Sep 9 04:54:32.031352 kernel: psci: SMC Calling Convention v1.4
Sep 9 04:54:32.031357 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Sep 9 04:54:32.031361 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Sep 9 04:54:32.031365 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Sep 9 04:54:32.031369 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Sep 9 04:54:32.031374 kernel: pcpu-alloc: [0] 0 [0] 1
Sep 9 04:54:32.031378 kernel: Detected PIPT I-cache on CPU0
Sep 9 04:54:32.031383 kernel: CPU features: detected: Address authentication (architected QARMA5 algorithm)
Sep 9 04:54:32.031388 kernel: CPU features: detected: GIC system register CPU interface
Sep 9 04:54:32.031392 kernel: CPU features: detected: Spectre-v4
Sep 9 04:54:32.031396 kernel: CPU features: detected: Spectre-BHB
Sep 9 04:54:32.031401 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 9 04:54:32.031405 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 9 04:54:32.031409 kernel: CPU features: detected: ARM erratum 2067961 or 2054223
Sep 9 04:54:32.031414 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 9 04:54:32.031418 kernel: alternatives: applying boot alternatives
Sep 9 04:54:32.031423 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=1e9320fd787e27d01e3b8a1acb67e0c640346112c469b7a652e9dcfc9271bf90
Sep 9 04:54:32.031428 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 9 04:54:32.031433 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 9 04:54:32.031438 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 9 04:54:32.031442 kernel: Fallback order for Node 0: 0
Sep 9 04:54:32.031446 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048540
Sep 9 04:54:32.031450 kernel: Policy zone: Normal
Sep 9 04:54:32.031455 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 9 04:54:32.031459 kernel: software IO TLB: area num 2.
Sep 9 04:54:32.031463 kernel: software IO TLB: mapped [mem 0x0000000036280000-0x000000003a280000] (64MB)
Sep 9 04:54:32.031468 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 9 04:54:32.031472 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 9 04:54:32.031477 kernel: rcu: RCU event tracing is enabled.
Sep 9 04:54:32.031482 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 9 04:54:32.031487 kernel: Trampoline variant of Tasks RCU enabled.
Sep 9 04:54:32.031491 kernel: Tracing variant of Tasks RCU enabled.
Sep 9 04:54:32.031495 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 9 04:54:32.031500 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 9 04:54:32.031504 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 9 04:54:32.031509 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 9 04:54:32.031513 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 9 04:54:32.031517 kernel: GICv3: 960 SPIs implemented
Sep 9 04:54:32.031521 kernel: GICv3: 0 Extended SPIs implemented
Sep 9 04:54:32.031526 kernel: Root IRQ handler: gic_handle_irq
Sep 9 04:54:32.031530 kernel: GICv3: GICv3 features: 16 PPIs, RSS
Sep 9 04:54:32.031535 kernel: GICv3: GICD_CTRL.DS=0, SCR_EL3.FIQ=0
Sep 9 04:54:32.031540 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Sep 9 04:54:32.031544 kernel: ITS: No ITS available, not enabling LPIs
Sep 9 04:54:32.031548 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 9 04:54:32.031553 kernel: arch_timer: cp15 timer(s) running at 1000.00MHz (virt).
Sep 9 04:54:32.031557 kernel: clocksource: arch_sys_counter: mask: 0x1fffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 9 04:54:32.031562 kernel: sched_clock: 61 bits at 1000MHz, resolution 1ns, wraps every 4398046511103ns
Sep 9 04:54:32.031566 kernel: Console: colour dummy device 80x25
Sep 9 04:54:32.031571 kernel: printk: legacy console [tty1] enabled
Sep 9 04:54:32.031575 kernel: ACPI: Core revision 20240827
Sep 9 04:54:32.031580 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 2000.00 BogoMIPS (lpj=1000000)
Sep 9 04:54:32.031585 kernel: pid_max: default: 32768 minimum: 301
Sep 9 04:54:32.031590 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 9 04:54:32.031594 kernel: landlock: Up and running.
Sep 9 04:54:32.031599 kernel: SELinux: Initializing.
Sep 9 04:54:32.031603 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 9 04:54:32.031611 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 9 04:54:32.031617 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x1a0000e, misc 0x31e1
Sep 9 04:54:32.031622 kernel: Hyper-V: Host Build 10.0.26100.1261-1-0
Sep 9 04:54:32.031626 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Sep 9 04:54:32.031631 kernel: rcu: Hierarchical SRCU implementation.
Sep 9 04:54:32.031636 kernel: rcu: Max phase no-delay instances is 400.
Sep 9 04:54:32.031641 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 9 04:54:32.031646 kernel: Remapping and enabling EFI services.
Sep 9 04:54:32.031651 kernel: smp: Bringing up secondary CPUs ...
Sep 9 04:54:32.031655 kernel: Detected PIPT I-cache on CPU1
Sep 9 04:54:32.031660 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Sep 9 04:54:32.031666 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd490]
Sep 9 04:54:32.031670 kernel: smp: Brought up 1 node, 2 CPUs
Sep 9 04:54:32.031675 kernel: SMP: Total of 2 processors activated.
Sep 9 04:54:32.031680 kernel: CPU: All CPU(s) started at EL1
Sep 9 04:54:32.031684 kernel: CPU features: detected: 32-bit EL0 Support
Sep 9 04:54:32.031689 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Sep 9 04:54:32.031694 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 9 04:54:32.031699 kernel: CPU features: detected: Common not Private translations
Sep 9 04:54:32.031704 kernel: CPU features: detected: CRC32 instructions
Sep 9 04:54:32.031709 kernel: CPU features: detected: Generic authentication (architected QARMA5 algorithm)
Sep 9 04:54:32.031714 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 9 04:54:32.031719 kernel: CPU features: detected: LSE atomic instructions
Sep 9 04:54:32.031723 kernel: CPU features: detected: Privileged Access Never
Sep 9 04:54:32.031728 kernel: CPU features: detected: Speculation barrier (SB)
Sep 9 04:54:32.031733 kernel: CPU features: detected: TLB range maintenance instructions
Sep 9 04:54:32.031738 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 9 04:54:32.031742 kernel: CPU features: detected: Scalable Vector Extension
Sep 9 04:54:32.031747 kernel: alternatives: applying system-wide alternatives
Sep 9 04:54:32.031752 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1
Sep 9 04:54:32.031757 kernel: SVE: maximum available vector length 16 bytes per vector
Sep 9 04:54:32.031762 kernel: SVE: default vector length 16 bytes per vector
Sep 9 04:54:32.031767 kernel: Memory: 3959604K/4194160K available (11136K kernel code, 2436K rwdata, 9060K rodata, 38976K init, 1038K bss, 213368K reserved, 16384K cma-reserved)
Sep 9 04:54:32.031772 kernel: devtmpfs: initialized
Sep 9 04:54:32.031776 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 9 04:54:32.031781 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 9 04:54:32.031786 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 9 04:54:32.031790 kernel: 0 pages in range for non-PLT usage
Sep 9 04:54:32.031796 kernel: 508560 pages in range for PLT usage
Sep 9 04:54:32.031801 kernel: pinctrl core: initialized pinctrl subsystem
Sep 9 04:54:32.031805 kernel: SMBIOS 3.1.0 present.
Sep 9 04:54:32.031810 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Sep 9 04:54:32.031815 kernel: DMI: Memory slots populated: 2/2
Sep 9 04:54:32.031819 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 9 04:54:32.031833 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 9 04:54:32.031838 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 9 04:54:32.031843 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 9 04:54:32.031849 kernel: audit: initializing netlink subsys (disabled)
Sep 9 04:54:32.031854 kernel: audit: type=2000 audit(0.059:1): state=initialized audit_enabled=0 res=1
Sep 9 04:54:32.031858 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 9 04:54:32.031863 kernel: cpuidle: using governor menu
Sep 9 04:54:32.031868 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 9 04:54:32.031872 kernel: ASID allocator initialised with 32768 entries
Sep 9 04:54:32.031877 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 9 04:54:32.031882 kernel: Serial: AMBA PL011 UART driver
Sep 9 04:54:32.031886 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 9 04:54:32.031892 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 9 04:54:32.031897 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 9 04:54:32.031901 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 9 04:54:32.031906 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 9 04:54:32.031911 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 9 04:54:32.031916 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 9 04:54:32.031920 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 9 04:54:32.031925 kernel: ACPI: Added _OSI(Module Device)
Sep 9 04:54:32.031930 kernel: ACPI: Added _OSI(Processor Device)
Sep 9 04:54:32.031935 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 9 04:54:32.031940 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 9 04:54:32.031944 kernel: ACPI: Interpreter enabled
Sep 9 04:54:32.031949 kernel: ACPI: Using GIC for interrupt routing
Sep 9 04:54:32.031954 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Sep 9 04:54:32.031959 kernel: printk: legacy console [ttyAMA0] enabled
Sep 9 04:54:32.031963 kernel: printk: legacy bootconsole [pl11] disabled
Sep 9 04:54:32.031968 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Sep 9 04:54:32.031973 kernel: ACPI: CPU0 has been hot-added
Sep 9 04:54:32.031978 kernel: ACPI: CPU1 has been hot-added
Sep 9 04:54:32.031983 kernel: iommu: Default domain type: Translated
Sep 9 04:54:32.031987 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 9 04:54:32.031992 kernel: efivars: Registered efivars operations
Sep 9 04:54:32.031997 kernel: vgaarb: loaded
Sep 9 04:54:32.032001 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 9 04:54:32.032006 kernel: VFS: Disk quotas dquot_6.6.0
Sep 9 04:54:32.032011 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 9 04:54:32.032016 kernel: pnp: PnP ACPI init
Sep 9 04:54:32.032021 kernel: pnp: PnP ACPI: found 0 devices
Sep 9 04:54:32.032026 kernel: NET: Registered PF_INET protocol family
Sep 9 04:54:32.032030 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 9 04:54:32.032035 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 9 04:54:32.032040 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 9 04:54:32.032045 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 9 04:54:32.032049 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 9 04:54:32.032054 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 9 04:54:32.032059 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 9 04:54:32.032064 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 9 04:54:32.032069 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 9 04:54:32.032074 kernel: PCI: CLS 0 bytes, default 64
Sep 9 04:54:32.032078 kernel: kvm [1]: HYP mode not available
Sep 9 04:54:32.032083 kernel: Initialise system trusted keyrings
Sep 9 04:54:32.032088 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 9 04:54:32.032092 kernel: Key type asymmetric registered
Sep 9 04:54:32.032097 kernel: Asymmetric key parser 'x509' registered
Sep 9 04:54:32.032102 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 9 04:54:32.032107 kernel: io scheduler mq-deadline registered
Sep 9 04:54:32.032112 kernel: io scheduler kyber registered
Sep 9 04:54:32.032117 kernel: io scheduler bfq registered
Sep 9 04:54:32.032121 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 9 04:54:32.032126 kernel: thunder_xcv, ver 1.0
Sep 9 04:54:32.032131 kernel: thunder_bgx, ver 1.0
Sep 9 04:54:32.032135 kernel: nicpf, ver 1.0
Sep 9 04:54:32.032140 kernel: nicvf, ver 1.0
Sep 9 04:54:32.032247 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 9 04:54:32.032298 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-09T04:54:31 UTC (1757393671)
Sep 9 04:54:32.032304 kernel: efifb: probing for efifb
Sep 9 04:54:32.032309 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Sep 9 04:54:32.032314 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Sep 9 04:54:32.032318 kernel: efifb: scrolling: redraw
Sep 9 04:54:32.032323 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Sep 9 04:54:32.032328 kernel: Console: switching to colour frame buffer device 128x48
Sep 9 04:54:32.032333 kernel: fb0: EFI VGA frame buffer device
Sep 9 04:54:32.032338 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Sep 9 04:54:32.032343 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 9 04:54:32.032348 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Sep 9 04:54:32.032353 kernel: watchdog: NMI not fully supported
Sep 9 04:54:32.032357 kernel: watchdog: Hard watchdog permanently disabled
Sep 9 04:54:32.032362 kernel: NET: Registered PF_INET6 protocol family
Sep 9 04:54:32.032367 kernel: Segment Routing with IPv6
Sep 9 04:54:32.032371 kernel: In-situ OAM (IOAM) with IPv6
Sep 9 04:54:32.032376 kernel: NET: Registered PF_PACKET protocol family
Sep 9 04:54:32.032382 kernel: Key type dns_resolver registered
Sep 9 04:54:32.032386 kernel: registered taskstats version 1
Sep 9 04:54:32.032391 kernel: Loading compiled-in X.509 certificates
Sep 9 04:54:32.032396 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.45-flatcar: 44d1e8b5c5ffbaa3cedd99c03d41580671fabec5'
Sep 9 04:54:32.032400 kernel: Demotion targets for Node 0: null
Sep 9 04:54:32.032405 kernel: Key type .fscrypt registered
Sep 9 04:54:32.032410 kernel: Key type fscrypt-provisioning registered
Sep 9 04:54:32.032414 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 9 04:54:32.032419 kernel: ima: Allocated hash algorithm: sha1
Sep 9 04:54:32.032425 kernel: ima: No architecture policies found
Sep 9 04:54:32.032430 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 9 04:54:32.032434 kernel: clk: Disabling unused clocks
Sep 9 04:54:32.032439 kernel: PM: genpd: Disabling unused power domains
Sep 9 04:54:32.032444 kernel: Warning: unable to open an initial console.
Sep 9 04:54:32.032449 kernel: Freeing unused kernel memory: 38976K
Sep 9 04:54:32.032454 kernel: Run /init as init process
Sep 9 04:54:32.032458 kernel: with arguments:
Sep 9 04:54:32.032463 kernel: /init
Sep 9 04:54:32.032468 kernel: with environment:
Sep 9 04:54:32.032472 kernel: HOME=/
Sep 9 04:54:32.032477 kernel: TERM=linux
Sep 9 04:54:32.032482 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 9 04:54:32.032487 systemd[1]: Successfully made /usr/ read-only.
Sep 9 04:54:32.032494 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 9 04:54:32.032500 systemd[1]: Detected virtualization microsoft.
Sep 9 04:54:32.032506 systemd[1]: Detected architecture arm64.
Sep 9 04:54:32.032511 systemd[1]: Running in initrd.
Sep 9 04:54:32.032516 systemd[1]: No hostname configured, using default hostname.
Sep 9 04:54:32.032521 systemd[1]: Hostname set to .
Sep 9 04:54:32.032526 systemd[1]: Initializing machine ID from random generator.
Sep 9 04:54:32.032531 systemd[1]: Queued start job for default target initrd.target.
Sep 9 04:54:32.032536 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 04:54:32.032542 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 04:54:32.032547 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 9 04:54:32.032554 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 9 04:54:32.032559 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 9 04:54:32.032565 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 9 04:54:32.032571 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 9 04:54:32.032576 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 9 04:54:32.032581 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 04:54:32.032587 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 9 04:54:32.032592 systemd[1]: Reached target paths.target - Path Units.
Sep 9 04:54:32.032597 systemd[1]: Reached target slices.target - Slice Units.
Sep 9 04:54:32.032603 systemd[1]: Reached target swap.target - Swaps.
Sep 9 04:54:32.032608 systemd[1]: Reached target timers.target - Timer Units.
Sep 9 04:54:32.032613 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 9 04:54:32.032618 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 9 04:54:32.032623 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 9 04:54:32.032629 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 9 04:54:32.032634 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 9 04:54:32.032640 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 9 04:54:32.032645 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 9 04:54:32.032650 systemd[1]: Reached target sockets.target - Socket Units.
Sep 9 04:54:32.032655 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 9 04:54:32.032661 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 9 04:54:32.032666 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 9 04:54:32.032671 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 9 04:54:32.032677 systemd[1]: Starting systemd-fsck-usr.service...
Sep 9 04:54:32.032683 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 9 04:54:32.032688 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 9 04:54:32.032703 systemd-journald[224]: Collecting audit messages is disabled.
Sep 9 04:54:32.032718 systemd-journald[224]: Journal started
Sep 9 04:54:32.032731 systemd-journald[224]: Runtime Journal (/run/log/journal/83d4e124ab264079851d91ea1aac6bb3) is 8M, max 78.5M, 70.5M free.
Sep 9 04:54:32.035858 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 04:54:32.040726 systemd-modules-load[226]: Inserted module 'overlay'
Sep 9 04:54:32.066275 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 9 04:54:32.066335 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 9 04:54:32.069615 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 9 04:54:32.078652 kernel: Bridge firewalling registered
Sep 9 04:54:32.073811 systemd-modules-load[226]: Inserted module 'br_netfilter'
Sep 9 04:54:32.074629 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 9 04:54:32.088306 systemd[1]: Finished systemd-fsck-usr.service.
Sep 9 04:54:32.096899 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 9 04:54:32.103449 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 04:54:32.113982 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 9 04:54:32.128960 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 9 04:54:32.141989 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 9 04:54:32.156982 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 9 04:54:32.171543 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 9 04:54:32.176727 systemd-tmpfiles[247]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 9 04:54:32.179952 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 9 04:54:32.187187 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 9 04:54:32.195639 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 9 04:54:32.207327 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 9 04:54:32.231966 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 9 04:54:32.241882 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 9 04:54:32.255008 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 9 04:54:32.266852 dracut-cmdline[262]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=1e9320fd787e27d01e3b8a1acb67e0c640346112c469b7a652e9dcfc9271bf90
Sep 9 04:54:32.307234 systemd-resolved[264]: Positive Trust Anchors:
Sep 9 04:54:32.307248 systemd-resolved[264]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 9 04:54:32.307268 systemd-resolved[264]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 9 04:54:32.309241 systemd-resolved[264]: Defaulting to hostname 'linux'.
Sep 9 04:54:32.310421 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 9 04:54:32.315801 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 9 04:54:32.380831 kernel: SCSI subsystem initialized
Sep 9 04:54:32.385841 kernel: Loading iSCSI transport class v2.0-870.
Sep 9 04:54:32.393855 kernel: iscsi: registered transport (tcp)
Sep 9 04:54:32.406411 kernel: iscsi: registered transport (qla4xxx)
Sep 9 04:54:32.406447 kernel: QLogic iSCSI HBA Driver
Sep 9 04:54:32.419580 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 9 04:54:32.442279 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 9 04:54:32.452803 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 9 04:54:32.495162 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 9 04:54:32.504093 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 9 04:54:32.562835 kernel: raid6: neonx8 gen() 18550 MB/s
Sep 9 04:54:32.575828 kernel: raid6: neonx4 gen() 18550 MB/s
Sep 9 04:54:32.594829 kernel: raid6: neonx2 gen() 17090 MB/s
Sep 9 04:54:32.614829 kernel: raid6: neonx1 gen() 15010 MB/s
Sep 9 04:54:32.633829 kernel: raid6: int64x8 gen() 10545 MB/s
Sep 9 04:54:32.652932 kernel: raid6: int64x4 gen() 10620 MB/s
Sep 9 04:54:32.671926 kernel: raid6: int64x2 gen() 8979 MB/s
Sep 9 04:54:32.693127 kernel: raid6: int64x1 gen() 7009 MB/s
Sep 9 04:54:32.693200 kernel: raid6: using algorithm neonx8 gen() 18550 MB/s
Sep 9 04:54:32.714843 kernel: raid6: .... xor() 14917 MB/s, rmw enabled
Sep 9 04:54:32.714920 kernel: raid6: using neon recovery algorithm
Sep 9 04:54:32.722488 kernel: xor: measuring software checksum speed
Sep 9 04:54:32.722495 kernel: 8regs : 28601 MB/sec
Sep 9 04:54:32.726237 kernel: 32regs : 28830 MB/sec
Sep 9 04:54:32.730056 kernel: arm64_neon : 37656 MB/sec
Sep 9 04:54:32.733011 kernel: xor: using function: arm64_neon (37656 MB/sec)
Sep 9 04:54:32.770860 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 9 04:54:32.776118 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 9 04:54:32.784962 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 9 04:54:32.808344 systemd-udevd[475]: Using default interface naming scheme 'v255'.
Sep 9 04:54:32.812034 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 9 04:54:32.823151 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 9 04:54:32.849354 dracut-pre-trigger[486]: rd.md=0: removing MD RAID activation
Sep 9 04:54:32.870589 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 9 04:54:32.875569 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 9 04:54:32.921304 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 9 04:54:32.931756 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 9 04:54:32.999845 kernel: hv_vmbus: Vmbus version:5.3
Sep 9 04:54:33.009866 kernel: hv_vmbus: registering driver hid_hyperv
Sep 9 04:54:33.009899 kernel: pps_core: LinuxPPS API ver. 1 registered
Sep 9 04:54:33.010426 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 9 04:54:33.060789 kernel: hv_vmbus: registering driver hyperv_keyboard
Sep 9 04:54:33.060806 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Sep 9 04:54:33.060815 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Sep 9 04:54:33.060830 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Sep 9 04:54:33.060838 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Sep 9 04:54:33.060955 kernel: hv_vmbus: registering driver hv_netvsc
Sep 9 04:54:33.060962 kernel: hv_vmbus: registering driver hv_storvsc
Sep 9 04:54:33.060968 kernel: PTP clock support registered
Sep 9 04:54:33.010522 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 04:54:33.050107 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 04:54:33.086428 kernel: scsi host0: storvsc_host_t
Sep 9 04:54:33.086554 kernel: scsi host1: storvsc_host_t
Sep 9 04:54:33.086570 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Sep 9 04:54:33.054608 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 04:54:33.098879 kernel: hv_utils: Registering HyperV Utility Driver
Sep 9 04:54:33.098899 kernel: hv_vmbus: registering driver hv_utils
Sep 9 04:54:33.068789 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 9 04:54:33.463899 kernel: hv_utils: Heartbeat IC version 3.0
Sep 9 04:54:33.463922 kernel: hv_utils: Shutdown IC version 3.2
Sep 9 04:54:33.463929 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Sep 9 04:54:33.463961 kernel: hv_utils: TimeSync IC version 4.0
Sep 9 04:54:33.463968 kernel: hv_netvsc 00224877-2b01-0022-4877-2b0100224877 eth0: VF slot 1 added
Sep 9 04:54:33.079819 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 9 04:54:33.079897 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 04:54:33.097957 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 04:54:33.463680 systemd-resolved[264]: Clock change detected. Flushing caches.
Sep 9 04:54:33.497006 kernel: hv_vmbus: registering driver hv_pci
Sep 9 04:54:33.497042 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Sep 9 04:54:33.501608 kernel: hv_pci 9bf97c50-25e5-458b-a775-b53ae83be2f6: PCI VMBus probing: Using version 0x10004
Sep 9 04:54:33.501747 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Sep 9 04:54:33.508075 kernel: hv_pci 9bf97c50-25e5-458b-a775-b53ae83be2f6: PCI host bridge to bus 25e5:00
Sep 9 04:54:33.508205 kernel: sd 0:0:0:0: [sda] Write Protect is off
Sep 9 04:54:33.514188 kernel: pci_bus 25e5:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Sep 9 04:54:33.514326 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Sep 9 04:54:33.518982 kernel: pci_bus 25e5:00: No busn resource found for root bus, will use [bus 00-ff]
Sep 9 04:54:33.519089 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Sep 9 04:54:33.526964 kernel: pci 25e5:00:02.0: [15b3:101a] type 00 class 0x020000 PCIe Endpoint
Sep 9 04:54:33.527002 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#261 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Sep 9 04:54:33.535747 kernel: pci 25e5:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]
Sep 9 04:54:33.535770 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#268 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Sep 9 04:54:33.545262 kernel: pci 25e5:00:02.0: enabling Extended Tags
Sep 9 04:54:33.546245 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 04:54:33.560008 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 9 04:54:33.564012 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Sep 9 04:54:33.564143 kernel: pci 25e5:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 25e5:00:02.0 (capable of 252.048 Gb/s with 16.0 GT/s PCIe x16 link)
Sep 9 04:54:33.581350 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Sep 9 04:54:33.581500 kernel: pci_bus 25e5:00: busn_res: [bus 00-ff] end is updated to 00
Sep 9 04:54:33.581577 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 9 04:54:33.581585 kernel: pci 25e5:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]: assigned
Sep 9 04:54:33.586010 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Sep 9 04:54:33.606016 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#308 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Sep 9 04:54:33.627120 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#279 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Sep 9 04:54:33.651127 kernel: mlx5_core 25e5:00:02.0: enabling device (0000 -> 0002)
Sep 9 04:54:33.659378 kernel: mlx5_core 25e5:00:02.0: PTM is not supported by PCIe
Sep 9 04:54:33.659501 kernel: mlx5_core 25e5:00:02.0: firmware version: 16.30.5006
Sep 9 04:54:33.827350 kernel: hv_netvsc 00224877-2b01-0022-4877-2b0100224877 eth0: VF registering: eth1
Sep 9 04:54:33.827534 kernel: mlx5_core 25e5:00:02.0 eth1: joined to eth0
Sep 9 04:54:33.833029 kernel: mlx5_core 25e5:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Sep 9 04:54:33.842047 kernel: mlx5_core 25e5:00:02.0 enP9701s1: renamed from eth1
Sep 9 04:54:34.110885 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Sep 9 04:54:34.152174 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Sep 9 04:54:34.169529 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Sep 9 04:54:34.185402 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Sep 9 04:54:34.190518 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Sep 9 04:54:34.203682 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 9 04:54:34.216194 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 9 04:54:34.221083 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 9 04:54:34.229526 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 9 04:54:34.238092 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 9 04:54:34.265477 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 9 04:54:34.284009 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#217 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Sep 9 04:54:34.291339 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 9 04:54:34.303699 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 9 04:54:35.313371 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#264 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Sep 9 04:54:35.328056 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 9 04:54:35.328098 disk-uuid[657]: The operation has completed successfully.
Sep 9 04:54:35.401530 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 9 04:54:35.401627 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 9 04:54:35.420150 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 9 04:54:35.441296 sh[822]: Success
Sep 9 04:54:35.472940 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 9 04:54:35.472986 kernel: device-mapper: uevent: version 1.0.3
Sep 9 04:54:35.477577 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 9 04:54:35.486023 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Sep 9 04:54:35.801930 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 9 04:54:35.806894 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 9 04:54:35.824877 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 9 04:54:35.850801 kernel: BTRFS: device fsid 72a0ff35-b4e8-4772-9a8d-d0e90c3fb364 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (840)
Sep 9 04:54:35.850844 kernel: BTRFS info (device dm-0): first mount of filesystem 72a0ff35-b4e8-4772-9a8d-d0e90c3fb364
Sep 9 04:54:35.855560 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 9 04:54:36.223766 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 9 04:54:36.223844 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 9 04:54:36.262687 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 9 04:54:36.266985 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 9 04:54:36.273719 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 9 04:54:36.274502 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 9 04:54:36.294612 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 9 04:54:36.322030 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (865)
Sep 9 04:54:36.331343 kernel: BTRFS info (device sda6): first mount of filesystem ea68277c-dabb-41e9-9258-b2fe475f0ae6
Sep 9 04:54:36.331380 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Sep 9 04:54:36.378408 kernel: BTRFS info (device sda6): turning on async discard
Sep 9 04:54:36.378474 kernel: BTRFS info (device sda6): enabling free space tree
Sep 9 04:54:36.387017 kernel: BTRFS info (device sda6): last unmount of filesystem ea68277c-dabb-41e9-9258-b2fe475f0ae6
Sep 9 04:54:36.387778 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 9 04:54:36.397398 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 9 04:54:36.416386 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 9 04:54:36.426377 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 9 04:54:36.460424 systemd-networkd[1009]: lo: Link UP
Sep 9 04:54:36.460434 systemd-networkd[1009]: lo: Gained carrier
Sep 9 04:54:36.461137 systemd-networkd[1009]: Enumeration completed
Sep 9 04:54:36.463260 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 9 04:54:36.467440 systemd-networkd[1009]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 04:54:36.467443 systemd-networkd[1009]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 9 04:54:36.467780 systemd[1]: Reached target network.target - Network.
Sep 9 04:54:36.539005 kernel: mlx5_core 25e5:00:02.0 enP9701s1: Link up
Sep 9 04:54:36.543016 kernel: buffer_size[0]=0 is not enough for lossless buffer
Sep 9 04:54:36.577501 systemd-networkd[1009]: enP9701s1: Link UP
Sep 9 04:54:36.580626 kernel: hv_netvsc 00224877-2b01-0022-4877-2b0100224877 eth0: Data path switched to VF: enP9701s1
Sep 9 04:54:36.577568 systemd-networkd[1009]: eth0: Link UP
Sep 9 04:54:36.577634 systemd-networkd[1009]: eth0: Gained carrier
Sep 9 04:54:36.577649 systemd-networkd[1009]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 04:54:36.594215 systemd-networkd[1009]: enP9701s1: Gained carrier
Sep 9 04:54:36.605022 systemd-networkd[1009]: eth0: DHCPv4 address 10.200.20.39/24, gateway 10.200.20.1 acquired from 168.63.129.16
Sep 9 04:54:37.631082 ignition[994]: Ignition 2.22.0
Sep 9 04:54:37.631091 ignition[994]: Stage: fetch-offline
Sep 9 04:54:37.634396 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 9 04:54:37.631187 ignition[994]: no configs at "/usr/lib/ignition/base.d"
Sep 9 04:54:37.637472 systemd-networkd[1009]: eth0: Gained IPv6LL
Sep 9 04:54:37.631193 ignition[994]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 9 04:54:37.646073 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 9 04:54:37.631274 ignition[994]: parsed url from cmdline: ""
Sep 9 04:54:37.631276 ignition[994]: no config URL provided
Sep 9 04:54:37.631279 ignition[994]: reading system config file "/usr/lib/ignition/user.ign"
Sep 9 04:54:37.631283 ignition[994]: no config at "/usr/lib/ignition/user.ign"
Sep 9 04:54:37.631287 ignition[994]: failed to fetch config: resource requires networking
Sep 9 04:54:37.631564 ignition[994]: Ignition finished successfully
Sep 9 04:54:37.683261 ignition[1019]: Ignition 2.22.0
Sep 9 04:54:37.683275 ignition[1019]: Stage: fetch
Sep 9 04:54:37.683450 ignition[1019]: no configs at "/usr/lib/ignition/base.d"
Sep 9 04:54:37.683459 ignition[1019]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 9 04:54:37.683522 ignition[1019]: parsed url from cmdline: ""
Sep 9 04:54:37.683525 ignition[1019]: no config URL provided
Sep 9 04:54:37.683528 ignition[1019]: reading system config file "/usr/lib/ignition/user.ign"
Sep 9 04:54:37.683533 ignition[1019]: no config at "/usr/lib/ignition/user.ign"
Sep 9 04:54:37.683549 ignition[1019]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Sep 9 04:54:37.754117 ignition[1019]: GET result: OK
Sep 9 04:54:37.756261 ignition[1019]: config has been read from IMDS userdata
Sep 9 04:54:37.756293 ignition[1019]: parsing config with SHA512: 13c19aefab2579d300ed691a5fd4fccb0cec9333a443bde4bdbb92dfde1c6bc69ce4e71a1f41eb899092073d2890922eea38b7eac409b5d234a8935778b9522c
Sep 9 04:54:37.759883 unknown[1019]: fetched base config from "system"
Sep 9 04:54:37.759901 unknown[1019]: fetched base config from "system"
Sep 9 04:54:37.760403 ignition[1019]: fetch: fetch complete
Sep 9 04:54:37.759905 unknown[1019]: fetched user config from "azure"
Sep 9 04:54:37.760410 ignition[1019]: fetch: fetch passed
Sep 9 04:54:37.764516 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 9 04:54:37.760499 ignition[1019]: Ignition finished successfully
Sep 9 04:54:37.771749 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 9 04:54:37.807369 ignition[1025]: Ignition 2.22.0
Sep 9 04:54:37.809778 ignition[1025]: Stage: kargs
Sep 9 04:54:37.809986 ignition[1025]: no configs at "/usr/lib/ignition/base.d"
Sep 9 04:54:37.813547 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 9 04:54:37.810010 ignition[1025]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 9 04:54:37.821096 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 9 04:54:37.810555 ignition[1025]: kargs: kargs passed
Sep 9 04:54:37.810600 ignition[1025]: Ignition finished successfully
Sep 9 04:54:37.852599 ignition[1031]: Ignition 2.22.0
Sep 9 04:54:37.854836 ignition[1031]: Stage: disks
Sep 9 04:54:37.855052 ignition[1031]: no configs at "/usr/lib/ignition/base.d"
Sep 9 04:54:37.858071 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 9 04:54:37.855061 ignition[1031]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 9 04:54:37.863546 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 9 04:54:37.855586 ignition[1031]: disks: disks passed
Sep 9 04:54:37.869532 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 9 04:54:37.855638 ignition[1031]: Ignition finished successfully
Sep 9 04:54:37.877867 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 9 04:54:37.885643 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 9 04:54:37.893328 systemd[1]: Reached target basic.target - Basic System.
Sep 9 04:54:37.899783 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 9 04:54:37.976031 systemd-fsck[1039]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks
Sep 9 04:54:37.984382 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 9 04:54:37.989860 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 9 04:54:40.009006 kernel: EXT4-fs (sda9): mounted filesystem 88574756-967d-44b3-be66-46689c8baf27 r/w with ordered data mode. Quota mode: none.
Sep 9 04:54:40.009335 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 9 04:54:40.012770 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 9 04:54:40.047336 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 9 04:54:40.063637 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 9 04:54:40.071300 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Sep 9 04:54:40.077247 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 9 04:54:40.077275 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 9 04:54:40.086211 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 9 04:54:40.101130 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 9 04:54:40.121011 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1053)
Sep 9 04:54:40.130051 kernel: BTRFS info (device sda6): first mount of filesystem ea68277c-dabb-41e9-9258-b2fe475f0ae6
Sep 9 04:54:40.130085 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Sep 9 04:54:40.139111 kernel: BTRFS info (device sda6): turning on async discard
Sep 9 04:54:40.139127 kernel: BTRFS info (device sda6): enabling free space tree
Sep 9 04:54:40.141108 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 9 04:54:40.695545 coreos-metadata[1055]: Sep 09 04:54:40.695 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Sep 9 04:54:40.701891 coreos-metadata[1055]: Sep 09 04:54:40.700 INFO Fetch successful
Sep 9 04:54:40.701891 coreos-metadata[1055]: Sep 09 04:54:40.700 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Sep 9 04:54:40.713885 coreos-metadata[1055]: Sep 09 04:54:40.708 INFO Fetch successful
Sep 9 04:54:40.713885 coreos-metadata[1055]: Sep 09 04:54:40.708 INFO wrote hostname ci-4452.0.0-n-087888047c to /sysroot/etc/hostname
Sep 9 04:54:40.714289 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 9 04:54:40.873475 initrd-setup-root[1083]: cut: /sysroot/etc/passwd: No such file or directory
Sep 9 04:54:40.933740 initrd-setup-root[1090]: cut: /sysroot/etc/group: No such file or directory
Sep 9 04:54:40.958213 initrd-setup-root[1097]: cut: /sysroot/etc/shadow: No such file or directory
Sep 9 04:54:40.963145 initrd-setup-root[1104]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 9 04:54:42.128008 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 9 04:54:42.133771 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 9 04:54:42.161530 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 9 04:54:42.174697 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 9 04:54:42.182307 kernel: BTRFS info (device sda6): last unmount of filesystem ea68277c-dabb-41e9-9258-b2fe475f0ae6
Sep 9 04:54:42.202839 ignition[1172]: INFO : Ignition 2.22.0
Sep 9 04:54:42.202839 ignition[1172]: INFO : Stage: mount
Sep 9 04:54:42.210780 ignition[1172]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 04:54:42.210780 ignition[1172]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 9 04:54:42.210780 ignition[1172]: INFO : mount: mount passed
Sep 9 04:54:42.210780 ignition[1172]: INFO : Ignition finished successfully
Sep 9 04:54:42.214830 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 9 04:54:42.223418 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 9 04:54:42.231701 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 9 04:54:42.253454 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 9 04:54:42.285306 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1183)
Sep 9 04:54:42.285356 kernel: BTRFS info (device sda6): first mount of filesystem ea68277c-dabb-41e9-9258-b2fe475f0ae6
Sep 9 04:54:42.289232 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Sep 9 04:54:42.298591 kernel: BTRFS info (device sda6): turning on async discard
Sep 9 04:54:42.298636 kernel: BTRFS info (device sda6): enabling free space tree
Sep 9 04:54:42.300330 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 9 04:54:42.324173 ignition[1201]: INFO : Ignition 2.22.0
Sep 9 04:54:42.324173 ignition[1201]: INFO : Stage: files
Sep 9 04:54:42.329981 ignition[1201]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 04:54:42.329981 ignition[1201]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 9 04:54:42.329981 ignition[1201]: DEBUG : files: compiled without relabeling support, skipping
Sep 9 04:54:42.356861 ignition[1201]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 9 04:54:42.356861 ignition[1201]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 9 04:54:42.437034 ignition[1201]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 9 04:54:42.442210 ignition[1201]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 9 04:54:42.442210 ignition[1201]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 9 04:54:42.438359 unknown[1201]: wrote ssh authorized keys file for user: core
Sep 9 04:54:42.491695 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Sep 9 04:54:42.499078 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Sep 9 04:54:42.566501 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 9 04:54:43.001153 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Sep 9 04:54:43.008834 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 9 04:54:43.008834 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 9 04:54:43.008834 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 9 04:54:43.008834 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 9 04:54:43.008834 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 9 04:54:43.008834 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 9 04:54:43.008834 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 9 04:54:43.008834 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 9 04:54:43.061454 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 9 04:54:43.061454 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 9 04:54:43.061454 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 9 04:54:43.061454 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 9 04:54:43.061454 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 9 04:54:43.061454 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Sep 9 04:54:43.426128 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 9 04:54:43.699212 ignition[1201]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 9 04:54:43.699212 ignition[1201]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 9 04:54:43.758763 ignition[1201]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 9 04:54:43.771650 ignition[1201]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 9 04:54:43.771650 ignition[1201]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 9 04:54:43.784624 ignition[1201]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Sep 9 04:54:43.784624 ignition[1201]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Sep 9 04:54:43.784624 ignition[1201]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 9 04:54:43.784624 ignition[1201]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 9 04:54:43.784624 ignition[1201]: INFO : files: files passed
Sep 9 04:54:43.784624 ignition[1201]: INFO : Ignition finished successfully
Sep 9 04:54:43.779743 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 9 04:54:43.791127 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 9 04:54:43.818864 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 9 04:54:43.827013 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 9 04:54:43.827088 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 9 04:54:43.859015 initrd-setup-root-after-ignition[1230]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 04:54:43.859015 initrd-setup-root-after-ignition[1230]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 04:54:43.872553 initrd-setup-root-after-ignition[1234]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 04:54:43.878456 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 9 04:54:43.887943 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 9 04:54:43.893425 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 9 04:54:43.940516 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 9 04:54:43.940625 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 9 04:54:43.948940 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 9 04:54:43.957579 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 9 04:54:43.964876 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 9 04:54:43.965698 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 9 04:54:44.009248 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 9 04:54:44.015804 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 9 04:54:44.036297 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 9 04:54:44.041089 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 9 04:54:44.049784 systemd[1]: Stopped target timers.target - Timer Units.
Sep 9 04:54:44.057653 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 9 04:54:44.057767 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 9 04:54:44.069183 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 9 04:54:44.073496 systemd[1]: Stopped target basic.target - Basic System.
Sep 9 04:54:44.081059 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 9 04:54:44.088875 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 9 04:54:44.096860 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 9 04:54:44.105055 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 9 04:54:44.114100 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 9 04:54:44.122207 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 9 04:54:44.130978 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 9 04:54:44.138142 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 9 04:54:44.147138 systemd[1]: Stopped target swap.target - Swaps.
Sep 9 04:54:44.153805 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 9 04:54:44.153920 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 9 04:54:44.164508 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 9 04:54:44.169260 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 04:54:44.177905 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 9 04:54:44.177982 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 04:54:44.186570 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 9 04:54:44.186669 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 9 04:54:44.199777 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 9 04:54:44.199866 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 9 04:54:44.205429 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 9 04:54:44.205499 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 9 04:54:44.276031 ignition[1254]: INFO : Ignition 2.22.0
Sep 9 04:54:44.276031 ignition[1254]: INFO : Stage: umount
Sep 9 04:54:44.276031 ignition[1254]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 04:54:44.276031 ignition[1254]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 9 04:54:44.276031 ignition[1254]: INFO : umount: umount passed
Sep 9 04:54:44.276031 ignition[1254]: INFO : Ignition finished successfully
Sep 9 04:54:44.213401 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Sep 9 04:54:44.213465 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 9 04:54:44.224792 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 9 04:54:44.238195 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 9 04:54:44.238336 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 9 04:54:44.252282 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 9 04:54:44.261942 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 9 04:54:44.262465 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 9 04:54:44.272439 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 9 04:54:44.272568 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 9 04:54:44.284793 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 9 04:54:44.285797 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 9 04:54:44.291488 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 9 04:54:44.291603 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 9 04:54:44.304065 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 9 04:54:44.304135 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 9 04:54:44.310005 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 9 04:54:44.310048 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 9 04:54:44.317428 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 9 04:54:44.317459 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 9 04:54:44.325441 systemd[1]: Stopped target network.target - Network.
Sep 9 04:54:44.338026 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 9 04:54:44.338104 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 9 04:54:44.346442 systemd[1]: Stopped target paths.target - Path Units.
Sep 9 04:54:44.355277 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 9 04:54:44.363045 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 04:54:44.367654 systemd[1]: Stopped target slices.target - Slice Units.
Sep 9 04:54:44.375520 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 9 04:54:44.383370 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 9 04:54:44.383416 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 9 04:54:44.391076 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 9 04:54:44.391138 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 9 04:54:44.398528 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 9 04:54:44.398582 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 9 04:54:44.405908 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 9 04:54:44.405938 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 9 04:54:44.413633 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 9 04:54:44.421452 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 9 04:54:44.438154 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 9 04:54:44.438683 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 9 04:54:44.440915 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 9 04:54:44.445219 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 9 04:54:44.445299 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 9 04:54:44.457695 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 9 04:54:44.457867 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 9 04:54:44.457946 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 9 04:54:44.638371 kernel: hv_netvsc 00224877-2b01-0022-4877-2b0100224877 eth0: Data path switched from VF: enP9701s1
Sep 9 04:54:44.470245 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 9 04:54:44.472118 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 9 04:54:44.478791 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 9 04:54:44.478845 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 9 04:54:44.486632 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 9 04:54:44.486693 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 9 04:54:44.495728 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 9 04:54:44.511246 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 9 04:54:44.511327 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 9 04:54:44.521026 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 9 04:54:44.521082 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 9 04:54:44.528244 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 9 04:54:44.528282 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 9 04:54:44.532565 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 9 04:54:44.532599 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 9 04:54:44.544289 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 9 04:54:44.552382 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 9 04:54:44.552444 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 9 04:54:44.585517 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 9 04:54:44.587221 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 9 04:54:44.593643 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 9 04:54:44.593692 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 9 04:54:44.601015 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 9 04:54:44.601043 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 9 04:54:44.608174 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 9 04:54:44.608230 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 9 04:54:44.626271 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 9 04:54:44.626340 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 9 04:54:44.638433 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 9 04:54:44.638482 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 9 04:54:44.653130 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 9 04:54:44.667336 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 9 04:54:44.667399 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 9 04:54:44.679177 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 9 04:54:44.679231 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 9 04:54:44.692549 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 9 04:54:44.692599 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 04:54:44.702697 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Sep 9 04:54:44.849686 systemd-journald[224]: Received SIGTERM from PID 1 (systemd).
Sep 9 04:54:44.702744 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 9 04:54:44.702769 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 9 04:54:44.703028 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 9 04:54:44.703099 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 9 04:54:44.714703 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 9 04:54:44.714831 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 9 04:54:44.721938 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 9 04:54:44.730381 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 9 04:54:44.769061 systemd[1]: Switching root.
Sep 9 04:54:44.885624 systemd-journald[224]: Journal stopped
Sep 9 04:54:52.382646 kernel: SELinux: policy capability network_peer_controls=1
Sep 9 04:54:52.382664 kernel: SELinux: policy capability open_perms=1
Sep 9 04:54:52.382671 kernel: SELinux: policy capability extended_socket_class=1
Sep 9 04:54:52.382677 kernel: SELinux: policy capability always_check_network=0
Sep 9 04:54:52.382683 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 9 04:54:52.382689 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 9 04:54:52.382695 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 9 04:54:52.382700 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 9 04:54:52.382705 kernel: SELinux: policy capability userspace_initial_context=0
Sep 9 04:54:52.382710 kernel: audit: type=1403 audit(1757393686.237:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 9 04:54:52.382717 systemd[1]: Successfully loaded SELinux policy in 183.810ms.
Sep 9 04:54:52.382725 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.361ms.
Sep 9 04:54:52.382731 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 9 04:54:52.382738 systemd[1]: Detected virtualization microsoft.
Sep 9 04:54:52.382744 systemd[1]: Detected architecture arm64.
Sep 9 04:54:52.382751 systemd[1]: Detected first boot.
Sep 9 04:54:52.382757 systemd[1]: Hostname set to .
Sep 9 04:54:52.382763 systemd[1]: Initializing machine ID from random generator.
Sep 9 04:54:52.382768 zram_generator::config[1297]: No configuration found.
Sep 9 04:54:52.382775 kernel: NET: Registered PF_VSOCK protocol family
Sep 9 04:54:52.382780 systemd[1]: Populated /etc with preset unit settings.
Sep 9 04:54:52.382788 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 9 04:54:52.382795 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 9 04:54:52.382800 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 9 04:54:52.382806 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 9 04:54:52.382812 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 9 04:54:52.382818 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 9 04:54:52.382824 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 9 04:54:52.382830 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 9 04:54:52.382837 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 9 04:54:52.382843 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 9 04:54:52.382850 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 9 04:54:52.382855 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 9 04:54:52.382862 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 04:54:52.382868 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 04:54:52.382874 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 9 04:54:52.382880 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 9 04:54:52.382886 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 9 04:54:52.382893 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 9 04:54:52.382899 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Sep 9 04:54:52.382906 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 04:54:52.382913 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 9 04:54:52.382919 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 9 04:54:52.382925 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 9 04:54:52.382931 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 9 04:54:52.382938 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 9 04:54:52.382945 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 9 04:54:52.382951 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 9 04:54:52.382957 systemd[1]: Reached target slices.target - Slice Units.
Sep 9 04:54:52.382963 systemd[1]: Reached target swap.target - Swaps.
Sep 9 04:54:52.382969 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 9 04:54:52.382975 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 9 04:54:52.382982 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 9 04:54:52.384982 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 9 04:54:52.385000 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 9 04:54:52.385007 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 9 04:54:52.385014 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 9 04:54:52.385020 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 9 04:54:52.385029 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 9 04:54:52.385035 systemd[1]: Mounting media.mount - External Media Directory...
Sep 9 04:54:52.385041 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 9 04:54:52.385047 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 9 04:54:52.385053 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 9 04:54:52.385061 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 9 04:54:52.385068 systemd[1]: Reached target machines.target - Containers.
Sep 9 04:54:52.385074 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 9 04:54:52.385081 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 9 04:54:52.385088 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 9 04:54:52.385094 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 9 04:54:52.385100 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 9 04:54:52.385106 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 9 04:54:52.385112 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 9 04:54:52.385119 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 9 04:54:52.385125 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 9 04:54:52.385131 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 9 04:54:52.385138 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 9 04:54:52.385145 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 9 04:54:52.385151 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 9 04:54:52.385157 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 9 04:54:52.385163 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 9 04:54:52.385177 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 9 04:54:52.385186 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 9 04:54:52.385192 kernel: loop: module loaded
Sep 9 04:54:52.385199 kernel: fuse: init (API version 7.41)
Sep 9 04:54:52.385205 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 9 04:54:52.385211 kernel: ACPI: bus type drm_connector registered
Sep 9 04:54:52.385218 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 9 04:54:52.385240 systemd-journald[1377]: Collecting audit messages is disabled.
Sep 9 04:54:52.385255 systemd-journald[1377]: Journal started
Sep 9 04:54:52.385270 systemd-journald[1377]: Runtime Journal (/run/log/journal/e40c2f61148446e7871df03e7ebde627) is 8M, max 78.5M, 70.5M free.
Sep 9 04:54:51.646804 systemd[1]: Queued start job for default target multi-user.target.
Sep 9 04:54:51.651404 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Sep 9 04:54:51.651769 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 9 04:54:51.652039 systemd[1]: systemd-journald.service: Consumed 2.230s CPU time.
Sep 9 04:54:52.402783 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 9 04:54:52.416341 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 9 04:54:52.423242 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 9 04:54:52.423271 systemd[1]: Stopped verity-setup.service.
Sep 9 04:54:52.435758 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 9 04:54:52.436453 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 9 04:54:52.440611 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 9 04:54:52.445348 systemd[1]: Mounted media.mount - External Media Directory.
Sep 9 04:54:52.449187 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 9 04:54:52.453386 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 9 04:54:52.457763 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 9 04:54:52.464010 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 9 04:54:52.468848 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 9 04:54:52.474907 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 9 04:54:52.475214 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 9 04:54:52.480487 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 9 04:54:52.480697 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 9 04:54:52.485906 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 9 04:54:52.486181 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 9 04:54:52.490965 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 9 04:54:52.491306 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 9 04:54:52.496588 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 9 04:54:52.496792 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 9 04:54:52.501562 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 9 04:54:52.501767 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 9 04:54:52.506953 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 9 04:54:52.512096 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 9 04:54:52.517618 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 9 04:54:52.523131 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 9 04:54:52.535839 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 9 04:54:52.542755 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 9 04:54:52.548053 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 9 04:54:52.565066 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 9 04:54:52.570851 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 9 04:54:52.570925 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 9 04:54:52.575503 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 9 04:54:52.581269 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 9 04:54:52.585587 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 9 04:54:52.600386 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 9 04:54:52.613113 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 9 04:54:52.618092 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 9 04:54:52.618824 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 9 04:54:52.623145 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 9 04:54:52.623811 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 9 04:54:52.630102 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 9 04:54:52.635546 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 9 04:54:52.640508 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 9 04:54:52.645300 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 9 04:54:52.662443 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 9 04:54:52.667813 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 9 04:54:52.673478 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 9 04:54:52.705550 systemd-journald[1377]: Time spent on flushing to /var/log/journal/e40c2f61148446e7871df03e7ebde627 is 17.393ms for 942 entries.
Sep 9 04:54:52.705550 systemd-journald[1377]: System Journal (/var/log/journal/e40c2f61148446e7871df03e7ebde627) is 8M, max 2.6G, 2.6G free.
Sep 9 04:54:52.807448 systemd-journald[1377]: Received client request to flush runtime journal.
Sep 9 04:54:52.807512 kernel: loop0: detected capacity change from 0 to 27936
Sep 9 04:54:52.760532 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 9 04:54:52.808682 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 9 04:54:52.820509 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 9 04:54:52.821515 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 9 04:54:53.214008 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 9 04:54:53.332734 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 9 04:54:53.339800 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 9 04:54:53.362013 kernel: loop1: detected capacity change from 0 to 100632
Sep 9 04:54:53.501259 systemd-tmpfiles[1452]: ACLs are not supported, ignoring.
Sep 9 04:54:53.501272 systemd-tmpfiles[1452]: ACLs are not supported, ignoring.
Sep 9 04:54:53.504768 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 9 04:54:53.861031 kernel: loop2: detected capacity change from 0 to 119368
Sep 9 04:54:54.227150 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 9 04:54:54.233961 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 9 04:54:54.259555 systemd-udevd[1458]: Using default interface naming scheme 'v255'.
Sep 9 04:54:54.309017 kernel: loop3: detected capacity change from 0 to 207008
Sep 9 04:54:54.342011 kernel: loop4: detected capacity change from 0 to 27936
Sep 9 04:54:54.354007 kernel: loop5: detected capacity change from 0 to 100632
Sep 9 04:54:54.366004 kernel: loop6: detected capacity change from 0 to 119368
Sep 9 04:54:54.378008 kernel: loop7: detected capacity change from 0 to 207008
Sep 9 04:54:54.393901 (sd-merge)[1461]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Sep 9 04:54:54.394329 (sd-merge)[1461]: Merged extensions into '/usr'.
Sep 9 04:54:54.397438 systemd[1]: Reload requested from client PID 1436 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 9 04:54:54.397538 systemd[1]: Reloading...
Sep 9 04:54:54.447021 zram_generator::config[1489]: No configuration found.
Sep 9 04:54:54.613343 systemd[1]: Reloading finished in 215 ms.
Sep 9 04:54:54.640021 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 9 04:54:54.648846 systemd[1]: Starting ensure-sysext.service...
Sep 9 04:54:54.654116 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 9 04:54:54.697190 systemd[1]: Reload requested from client PID 1542 ('systemctl') (unit ensure-sysext.service)...
Sep 9 04:54:54.697202 systemd[1]: Reloading...
Sep 9 04:54:54.721149 systemd-tmpfiles[1543]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 9 04:54:54.721200 systemd-tmpfiles[1543]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 9 04:54:54.721401 systemd-tmpfiles[1543]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 9 04:54:54.721541 systemd-tmpfiles[1543]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 9 04:54:54.721953 systemd-tmpfiles[1543]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 9 04:54:54.722108 systemd-tmpfiles[1543]: ACLs are not supported, ignoring.
Sep 9 04:54:54.722138 systemd-tmpfiles[1543]: ACLs are not supported, ignoring.
Sep 9 04:54:54.741104 zram_generator::config[1571]: No configuration found.
Sep 9 04:54:54.802961 systemd-tmpfiles[1543]: Detected autofs mount point /boot during canonicalization of boot.
Sep 9 04:54:54.802972 systemd-tmpfiles[1543]: Skipping /boot
Sep 9 04:54:54.808940 systemd-tmpfiles[1543]: Detected autofs mount point /boot during canonicalization of boot.
Sep 9 04:54:54.808954 systemd-tmpfiles[1543]: Skipping /boot
Sep 9 04:54:54.872149 systemd[1]: Reloading finished in 174 ms.
Sep 9 04:54:54.881522 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 9 04:54:54.895127 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 9 04:54:54.913412 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 9 04:54:54.927151 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 9 04:54:54.935112 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 9 04:54:54.941096 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 9 04:54:54.947828 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 9 04:54:54.953628 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 9 04:54:54.965471 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 9 04:54:54.977050 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 9 04:54:54.981154 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 9 04:54:54.981407 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 9 04:54:54.981931 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 9 04:54:54.991533 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 9 04:54:55.013269 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 9 04:54:55.020475 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 9 04:54:55.020869 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 9 04:54:55.028407 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 9 04:54:55.028536 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 9 04:54:55.042513 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 9 04:54:55.058313 systemd[1]: Finished ensure-sysext.service.
Sep 9 04:54:55.063896 systemd[1]: Expecting device dev-ptp_hyperv.device - /dev/ptp_hyperv...
Sep 9 04:54:55.068077 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 9 04:54:55.071183 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 9 04:54:55.078113 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 9 04:54:55.095542 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 9 04:54:55.103643 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 9 04:54:55.109374 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 9 04:54:55.109412 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 9 04:54:55.111746 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 9 04:54:55.117808 systemd[1]: Reached target time-set.target - System Time Set.
Sep 9 04:54:55.124172 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 9 04:54:55.129447 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 9 04:54:55.129578 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 9 04:54:55.134421 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 9 04:54:55.138424 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 9 04:54:55.142880 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 9 04:54:55.143010 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 9 04:54:55.147838 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 9 04:54:55.147952 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 9 04:54:55.153793 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Sep 9 04:54:55.153927 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 9 04:54:55.153979 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 9 04:54:55.201256 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 9 04:54:55.212253 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#24 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Sep 9 04:54:55.223234 augenrules[1720]: No rules
Sep 9 04:54:55.224512 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 9 04:54:55.225178 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 9 04:54:55.232133 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 9 04:54:55.246044 kernel: mousedev: PS/2 mouse device common for all mice
Sep 9 04:54:55.246104 kernel: hv_vmbus: registering driver hv_balloon
Sep 9 04:54:55.254080 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Sep 9 04:54:55.256881 kernel: hv_balloon: Memory hot add disabled on ARM64
Sep 9 04:54:55.279607 kernel: hv_vmbus: registering driver hyperv_fb
Sep 9 04:54:55.279659 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Sep 9 04:54:55.284225 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Sep 9 04:54:55.288237 kernel: Console: switching to colour dummy device 80x25
Sep 9 04:54:55.292022 kernel: Console: switching to colour frame buffer device 128x48
Sep 9 04:54:55.306173 systemd[1]: Condition check resulted in dev-ptp_hyperv.device - /dev/ptp_hyperv being skipped.
Sep 9 04:54:55.313350 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 04:54:55.326610 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 9 04:54:55.327244 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 04:54:55.339579 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 04:54:55.351221 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 9 04:54:55.351371 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 04:54:55.360124 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 04:54:55.432282 systemd-networkd[1687]: lo: Link UP
Sep 9 04:54:55.432289 systemd-networkd[1687]: lo: Gained carrier
Sep 9 04:54:55.433512 systemd-networkd[1687]: Enumeration completed
Sep 9 04:54:55.433667 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 9 04:54:55.433919 systemd-networkd[1687]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 04:54:55.433982 systemd-networkd[1687]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 9 04:54:55.439942 systemd-resolved[1631]: Positive Trust Anchors:
Sep 9 04:54:55.439953 systemd-resolved[1631]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 9 04:54:55.439973 systemd-resolved[1631]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 9 04:54:55.442635 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 9 04:54:55.451256 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 9 04:54:55.481981 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Sep 9 04:54:55.489134 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 9 04:54:55.500460 kernel: mlx5_core 25e5:00:02.0 enP9701s1: Link up
Sep 9 04:54:55.500671 kernel: buffer_size[0]=0 is not enough for lossless buffer
Sep 9 04:54:55.501728 systemd-resolved[1631]: Using system hostname 'ci-4452.0.0-n-087888047c'.
Sep 9 04:54:55.525249 kernel: hv_netvsc 00224877-2b01-0022-4877-2b0100224877 eth0: Data path switched to VF: enP9701s1
Sep 9 04:54:55.525555 systemd-networkd[1687]: enP9701s1: Link UP
Sep 9 04:54:55.525672 systemd-networkd[1687]: eth0: Link UP
Sep 9 04:54:55.525675 systemd-networkd[1687]: eth0: Gained carrier
Sep 9 04:54:55.525693 systemd-networkd[1687]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 04:54:55.527006 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 9 04:54:55.534269 systemd[1]: Reached target network.target - Network.
Sep 9 04:54:55.535187 systemd-networkd[1687]: enP9701s1: Gained carrier
Sep 9 04:54:55.537774 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 9 04:54:55.545474 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 9 04:54:55.551053 systemd-networkd[1687]: eth0: DHCPv4 address 10.200.20.39/24, gateway 10.200.20.1 acquired from 168.63.129.16
Sep 9 04:54:55.590006 kernel: MACsec IEEE 802.1AE
Sep 9 04:54:55.590459 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 9 04:54:56.919514 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 04:54:57.333407 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 9 04:54:57.338537 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 9 04:54:57.541147 systemd-networkd[1687]: eth0: Gained IPv6LL
Sep 9 04:54:57.543330 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 9 04:54:57.548743 systemd[1]: Reached target network-online.target - Network is Online.
Sep 9 04:55:00.796266 ldconfig[1431]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 9 04:55:00.812395 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 9 04:55:00.818614 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 9 04:55:00.856158 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 9 04:55:00.860583 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 9 04:55:00.864663 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 9 04:55:00.869406 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 9 04:55:00.874114 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 9 04:55:00.877984 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 9 04:55:00.882809 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 9 04:55:00.887446 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 9 04:55:00.887472 systemd[1]: Reached target paths.target - Path Units.
Sep 9 04:55:00.890695 systemd[1]: Reached target timers.target - Timer Units.
Sep 9 04:55:00.907961 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 9 04:55:00.913148 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 9 04:55:00.917881 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 9 04:55:00.922499 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 9 04:55:00.927035 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 9 04:55:00.932468 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 9 04:55:00.950376 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 9 04:55:00.955281 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 9 04:55:00.959366 systemd[1]: Reached target sockets.target - Socket Units.
Sep 9 04:55:00.962816 systemd[1]: Reached target basic.target - Basic System.
Sep 9 04:55:00.966300 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 9 04:55:00.966318 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 9 04:55:00.988474 systemd[1]: Starting chronyd.service - NTP client/server...
Sep 9 04:55:01.000082 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 9 04:55:01.006108 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Sep 9 04:55:01.011903 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 9 04:55:01.019087 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 9 04:55:01.027073 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 9 04:55:01.034782 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 9 04:55:01.038868 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 9 04:55:01.046183 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Sep 9 04:55:01.050336 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Sep 9 04:55:01.052077 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 04:55:01.058507 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 9 04:55:01.063114 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 9 04:55:01.067197 jq[1836]: false
Sep 9 04:55:01.069707 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 9 04:55:01.075100 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 9 04:55:01.081943 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 9 04:55:01.091026 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 9 04:55:01.095571 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 9 04:55:01.096040 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 9 04:55:01.098983 systemd[1]: Starting update-engine.service - Update Engine...
Sep 9 04:55:01.099102 KVP[1838]: KVP starting; pid is:1838
Sep 9 04:55:01.107977 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 9 04:55:01.109395 KVP[1838]: KVP LIC Version: 3.1
Sep 9 04:55:01.109645 chronyd[1828]: chronyd version 4.7 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG)
Sep 9 04:55:01.110125 kernel: hv_utils: KVP IC version 4.0
Sep 9 04:55:01.117454 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 9 04:55:01.119273 jq[1854]: true
Sep 9 04:55:01.122655 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 9 04:55:01.122811 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 9 04:55:01.129067 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 9 04:55:01.129239 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 9 04:55:01.144181 extend-filesystems[1837]: Found /dev/sda6
Sep 9 04:55:01.154833 (ntainerd)[1866]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 9 04:55:01.158832 systemd[1]: motdgen.service: Deactivated successfully.
Sep 9 04:55:01.159075 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 9 04:55:01.166011 jq[1865]: true
Sep 9 04:55:01.169915 extend-filesystems[1837]: Found /dev/sda9
Sep 9 04:55:01.173325 extend-filesystems[1837]: Checking size of /dev/sda9
Sep 9 04:55:01.184258 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 9 04:55:01.206354 chronyd[1828]: Timezone right/UTC failed leap second check, ignoring
Sep 9 04:55:01.206534 chronyd[1828]: Loaded seccomp filter (level 2)
Sep 9 04:55:01.206654 systemd[1]: Started chronyd.service - NTP client/server.
Sep 9 04:55:01.233291 update_engine[1852]: I20250909 04:55:01.232069 1852 main.cc:92] Flatcar Update Engine starting
Sep 9 04:55:01.233531 extend-filesystems[1837]: Old size kept for /dev/sda9
Sep 9 04:55:01.233497 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 9 04:55:01.233694 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 9 04:55:01.242386 systemd-logind[1850]: New seat seat0.
Sep 9 04:55:01.243019 systemd-logind[1850]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 9 04:55:01.251395 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 9 04:55:01.261767 tar[1858]: linux-arm64/LICENSE
Sep 9 04:55:01.262168 tar[1858]: linux-arm64/helm
Sep 9 04:55:01.275009 bash[1899]: Updated "/home/core/.ssh/authorized_keys"
Sep 9 04:55:01.276101 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 9 04:55:01.283051 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep 9 04:55:01.489403 dbus-daemon[1831]: [system] SELinux support is enabled
Sep 9 04:55:01.490592 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 9 04:55:01.503593 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 9 04:55:01.504002 dbus-daemon[1831]: [system] Successfully activated service 'org.freedesktop.systemd1'
Sep 9 04:55:01.504042 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 9 04:55:01.509274 update_engine[1852]: I20250909 04:55:01.502092 1852 update_check_scheduler.cc:74] Next update check in 7m45s
Sep 9 04:55:01.512814 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 9 04:55:01.512831 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 9 04:55:01.529422 systemd[1]: Started update-engine.service - Update Engine.
Sep 9 04:55:01.543317 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 9 04:55:01.595439 coreos-metadata[1830]: Sep 09 04:55:01.595 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Sep 9 04:55:01.599532 coreos-metadata[1830]: Sep 09 04:55:01.598 INFO Fetch successful
Sep 9 04:55:01.599532 coreos-metadata[1830]: Sep 09 04:55:01.598 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Sep 9 04:55:01.603884 coreos-metadata[1830]: Sep 09 04:55:01.603 INFO Fetch successful
Sep 9 04:55:01.606212 coreos-metadata[1830]: Sep 09 04:55:01.606 INFO Fetching http://168.63.129.16/machine/c39959d0-7f85-4c55-9733-97ee1ea44317/fcc4f876%2Da1f7%2D46a9%2D9592%2D56044bf29fe1.%5Fci%2D4452.0.0%2Dn%2D087888047c?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Sep 9 04:55:01.610327 coreos-metadata[1830]: Sep 09 04:55:01.610 INFO Fetch successful
Sep 9 04:55:01.610486 coreos-metadata[1830]: Sep 09 04:55:01.610 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Sep 9 04:55:01.620596 coreos-metadata[1830]: Sep 09 04:55:01.620 INFO Fetch successful
Sep 9 04:55:01.650382 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Sep 9 04:55:01.660558 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 9 04:55:01.727747 sshd_keygen[1876]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 9 04:55:01.744175 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 9 04:55:01.750149 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 9 04:55:01.757069 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Sep 9 04:55:01.779236 tar[1858]: linux-arm64/README.md
Sep 9 04:55:01.784373 systemd[1]: issuegen.service: Deactivated successfully.
Sep 9 04:55:01.784553 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 9 04:55:01.790980 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Sep 9 04:55:01.802326 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 9 04:55:01.812030 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 9 04:55:01.824561 locksmithd[1969]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 9 04:55:01.833431 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 9 04:55:01.844702 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 9 04:55:01.850367 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Sep 9 04:55:01.856162 systemd[1]: Reached target getty.target - Login Prompts.
Sep 9 04:55:02.008781 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 04:55:02.013786 (kubelet)[2015]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 9 04:55:02.073332 containerd[1866]: time="2025-09-09T04:55:02Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Sep 9 04:55:02.074210 containerd[1866]: time="2025-09-09T04:55:02.073957052Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Sep 9 04:55:02.079672 containerd[1866]: time="2025-09-09T04:55:02.079641180Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.752µs"
Sep 9 04:55:02.080465 containerd[1866]: time="2025-09-09T04:55:02.079734492Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Sep 9 04:55:02.080465 containerd[1866]: time="2025-09-09T04:55:02.079757252Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Sep 9 04:55:02.080465 containerd[1866]: time="2025-09-09T04:55:02.079890420Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Sep 9 04:55:02.080465 containerd[1866]: time="2025-09-09T04:55:02.079902428Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Sep 9 04:55:02.080465 containerd[1866]: time="2025-09-09T04:55:02.079919188Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 9 04:55:02.080465 containerd[1866]: time="2025-09-09T04:55:02.079954212Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 9 04:55:02.080465 containerd[1866]: time="2025-09-09T04:55:02.079960892Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 9 04:55:02.080465 containerd[1866]: time="2025-09-09T04:55:02.080134052Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 9 04:55:02.080465 containerd[1866]: time="2025-09-09T04:55:02.080145124Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 9 04:55:02.080465 containerd[1866]: time="2025-09-09T04:55:02.080152156Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 9 04:55:02.080465 containerd[1866]: time="2025-09-09T04:55:02.080157452Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Sep 9 04:55:02.080465 containerd[1866]: time="2025-09-09T04:55:02.080209260Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Sep 9 04:55:02.080690 containerd[1866]: time="2025-09-09T04:55:02.080370820Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 9 04:55:02.080690 containerd[1866]: time="2025-09-09T04:55:02.080392532Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 9 04:55:02.080690 containerd[1866]: time="2025-09-09T04:55:02.080399828Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Sep 9 04:55:02.080690 containerd[1866]: time="2025-09-09T04:55:02.080426876Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Sep 9 04:55:02.080929 containerd[1866]: time="2025-09-09T04:55:02.080907204Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Sep 9 04:55:02.081082 containerd[1866]: time="2025-09-09T04:55:02.081065724Z" level=info msg="metadata content store policy set" policy=shared
Sep 9 04:55:02.098214 containerd[1866]: time="2025-09-09T04:55:02.098188324Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Sep 9 04:55:02.098331 containerd[1866]: time="2025-09-09T04:55:02.098316948Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Sep 9 04:55:02.098392 containerd[1866]: time="2025-09-09T04:55:02.098381380Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Sep 9 04:55:02.098435 containerd[1866]: time="2025-09-09T04:55:02.098424484Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Sep 9 04:55:02.098512 containerd[1866]: time="2025-09-09T04:55:02.098500668Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Sep 9 04:55:02.098595 containerd[1866]: time="2025-09-09T04:55:02.098585340Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Sep 9 04:55:02.098655 containerd[1866]: time="2025-09-09T04:55:02.098639772Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Sep 9 04:55:02.098710 containerd[1866]: time="2025-09-09T04:55:02.098697340Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Sep 9 04:55:02.098764 containerd[1866]: time="2025-09-09T04:55:02.098753852Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Sep 9 04:55:02.098819 containerd[1866]: time="2025-09-09T04:55:02.098802052Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Sep 9 04:55:02.098863 containerd[1866]: time="2025-09-09T04:55:02.098852564Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Sep 9 04:55:02.098901 containerd[1866]: time="2025-09-09T04:55:02.098891316Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Sep 9 04:55:02.099096 containerd[1866]: time="2025-09-09T04:55:02.099073124Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Sep 9 04:55:02.099166 containerd[1866]: time="2025-09-09T04:55:02.099149188Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Sep 9 04:55:02.099245 containerd[1866]: time="2025-09-09T04:55:02.099231588Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Sep 9 04:55:02.099294 containerd[1866]: time="2025-09-09T04:55:02.099278084Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Sep 9 04:55:02.099343 containerd[1866]: time="2025-09-09T04:55:02.099332420Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Sep 9 04:55:02.099395 containerd[1866]: time="2025-09-09T04:55:02.099379436Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Sep 9 04:55:02.099446 containerd[1866]: time="2025-09-09T04:55:02.099435084Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Sep 9 04:55:02.099496 containerd[1866]: time="2025-09-09T04:55:02.099487124Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Sep 9 04:55:02.099540 containerd[1866]: time="2025-09-09T04:55:02.099531316Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Sep 9 04:55:02.099592 containerd[1866]: time="2025-09-09T04:55:02.099582284Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Sep 9 04:55:02.099639 containerd[1866]: time="2025-09-09T04:55:02.099627996Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Sep 9 04:55:02.099773 containerd[1866]: time="2025-09-09T04:55:02.099729972Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Sep 9 04:55:02.099846 containerd[1866]: time="2025-09-09T04:55:02.099834636Z" level=info msg="Start snapshots syncer"
Sep 9 04:55:02.099924 containerd[1866]: time="2025-09-09T04:55:02.099914140Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Sep 9 04:55:02.100175 containerd[1866]: time="2025-09-09T04:55:02.100147508Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Sep 9 04:55:02.100334 containerd[1866]: time="2025-09-09T04:55:02.100320116Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Sep 9 04:55:02.100477 containerd[1866]: time="2025-09-09T04:55:02.100462452Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Sep 9 04:55:02.100686 containerd[1866]: time="2025-09-09T04:55:02.100641228Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Sep 9 04:55:02.100686 containerd[1866]: time="2025-09-09T04:55:02.100662756Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Sep 9 04:55:02.100686 containerd[1866]: time="2025-09-09T04:55:02.100671220Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Sep 9 04:55:02.100787 containerd[1866]: time="2025-09-09T04:55:02.100775252Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Sep 9 04:55:02.100825 containerd[1866]: time="2025-09-09T04:55:02.100816980Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Sep 9 04:55:02.100870 containerd[1866]: time="2025-09-09T04:55:02.100862132Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Sep 9 04:55:02.100909 containerd[1866]: time="2025-09-09T04:55:02.100900452Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Sep 9 04:55:02.100999 containerd[1866]: time="2025-09-09T04:55:02.100966796Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Sep 9 04:55:02.100999 containerd[1866]: time="2025-09-09T04:55:02.100981668Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Sep 9 04:55:02.101076 containerd[1866]: time="2025-09-09T04:55:02.101062868Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Sep 9 04:55:02.101209 containerd[1866]: time="2025-09-09T04:55:02.101156540Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 9 04:55:02.101209 containerd[1866]: time="2025-09-09T04:55:02.101173164Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 9 04:55:02.101209 containerd[1866]: time="2025-09-09T04:55:02.101179340Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 9 04:55:02.101209 containerd[1866]: time="2025-09-09T04:55:02.101186036Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 9 04:55:02.101209 containerd[1866]: time="2025-09-09T04:55:02.101194372Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Sep 9 04:55:02.101310 containerd[1866]: time="2025-09-09T04:55:02.101203596Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Sep 9 04:55:02.101356 containerd[1866]: time="2025-09-09T04:55:02.101346348Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Sep 9 04:55:02.101418 containerd[1866]: time="2025-09-09T04:55:02.101408876Z" level=info msg="runtime interface created"
Sep 9 04:55:02.101457 containerd[1866]: time="2025-09-09T04:55:02.101448892Z" level=info msg="created NRI interface"
Sep 9 04:55:02.101506 containerd[1866]: time="2025-09-09T04:55:02.101496020Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Sep 9 04:55:02.101623 containerd[1866]: time="2025-09-09T04:55:02.101535068Z" level=info msg="Connect containerd service"
Sep 9 04:55:02.101623 containerd[1866]: time="2025-09-09T04:55:02.101575004Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 9 04:55:02.102376 containerd[1866]: time="2025-09-09T04:55:02.102357924Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 9 04:55:02.350876 kubelet[2015]: E0909 04:55:02.350745 2015 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 9 04:55:02.352576 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 9 04:55:02.352688 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 9 04:55:02.354111 systemd[1]: kubelet.service: Consumed 539ms CPU time, 256.4M memory peak.
Sep 9 04:55:02.914720 containerd[1866]: time="2025-09-09T04:55:02.914641012Z" level=info msg="Start subscribing containerd event"
Sep 9 04:55:02.914827 containerd[1866]: time="2025-09-09T04:55:02.914775388Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 9 04:55:02.914827 containerd[1866]: time="2025-09-09T04:55:02.914816252Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 9 04:55:02.914933 containerd[1866]: time="2025-09-09T04:55:02.914889916Z" level=info msg="Start recovering state"
Sep 9 04:55:02.915483 containerd[1866]: time="2025-09-09T04:55:02.915137068Z" level=info msg="Start event monitor"
Sep 9 04:55:02.915483 containerd[1866]: time="2025-09-09T04:55:02.915160388Z" level=info msg="Start cni network conf syncer for default"
Sep 9 04:55:02.915483 containerd[1866]: time="2025-09-09T04:55:02.915167604Z" level=info msg="Start streaming server"
Sep 9 04:55:02.915483 containerd[1866]: time="2025-09-09T04:55:02.915177812Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Sep 9 04:55:02.915483 containerd[1866]: time="2025-09-09T04:55:02.915183356Z" level=info msg="runtime interface starting up..."
Sep 9 04:55:02.915483 containerd[1866]: time="2025-09-09T04:55:02.915189292Z" level=info msg="starting plugins..."
Sep 9 04:55:02.915483 containerd[1866]: time="2025-09-09T04:55:02.915201420Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Sep 9 04:55:02.915759 containerd[1866]: time="2025-09-09T04:55:02.915647612Z" level=info msg="containerd successfully booted in 0.842655s"
Sep 9 04:55:02.916035 systemd[1]: Started containerd.service - containerd container runtime.
Sep 9 04:55:02.922501 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 9 04:55:02.929057 systemd[1]: Startup finished in 1.577s (kernel) + 14.103s (initrd) + 16.873s (userspace) = 32.555s.
Sep 9 04:55:03.518738 login[2008]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying
Sep 9 04:55:03.532967 login[2009]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 04:55:03.542295 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 9 04:55:03.543139 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 9 04:55:03.548210 systemd-logind[1850]: New session 1 of user core.
Sep 9 04:55:03.573016 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 9 04:55:03.574480 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 9 04:55:03.600200 (systemd)[2042]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 9 04:55:03.601909 systemd-logind[1850]: New session c1 of user core.
Sep 9 04:55:03.928778 systemd[2042]: Queued start job for default target default.target.
Sep 9 04:55:03.934686 systemd[2042]: Created slice app.slice - User Application Slice.
Sep 9 04:55:03.934709 systemd[2042]: Reached target paths.target - Paths.
Sep 9 04:55:03.934739 systemd[2042]: Reached target timers.target - Timers.
Sep 9 04:55:03.935693 systemd[2042]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 9 04:55:03.942612 systemd[2042]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 9 04:55:03.942651 systemd[2042]: Reached target sockets.target - Sockets.
Sep 9 04:55:03.942680 systemd[2042]: Reached target basic.target - Basic System.
Sep 9 04:55:03.942701 systemd[2042]: Reached target default.target - Main User Target.
Sep 9 04:55:03.942720 systemd[2042]: Startup finished in 336ms.
Sep 9 04:55:03.942929 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 9 04:55:03.950131 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 9 04:55:03.981088 waagent[1995]: 2025-09-09T04:55:03.981026Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4
Sep 9 04:55:03.985065 waagent[1995]: 2025-09-09T04:55:03.985026Z INFO Daemon Daemon OS: flatcar 4452.0.0
Sep 9 04:55:03.988189 waagent[1995]: 2025-09-09T04:55:03.988163Z INFO Daemon Daemon Python: 3.11.13
Sep 9 04:55:03.993008 waagent[1995]: 2025-09-09T04:55:03.992079Z INFO Daemon Daemon Run daemon
Sep 9 04:55:03.995184 waagent[1995]: 2025-09-09T04:55:03.995152Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4452.0.0'
Sep 9 04:55:04.001472 waagent[1995]: 2025-09-09T04:55:04.001331Z INFO Daemon Daemon Using waagent for provisioning
Sep 9 04:55:04.005325 waagent[1995]: 2025-09-09T04:55:04.005286Z INFO Daemon Daemon Activate resource disk
Sep 9 04:55:04.009354 waagent[1995]: 2025-09-09T04:55:04.009300Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
Sep 9 04:55:04.017698 waagent[1995]: 2025-09-09T04:55:04.017657Z INFO Daemon Daemon Found device: None
Sep 9 04:55:04.020928 waagent[1995]: 2025-09-09T04:55:04.020899Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
Sep 9 04:55:04.027714 waagent[1995]: 2025-09-09T04:55:04.027663Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
Sep 9 04:55:04.035482 waagent[1995]: 2025-09-09T04:55:04.035440Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Sep 9 04:55:04.039446 waagent[1995]: 2025-09-09T04:55:04.039415Z INFO Daemon Daemon Running default provisioning handler
Sep 9 04:55:04.048740 waagent[1995]: 2025-09-09T04:55:04.048694Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4.
Sep 9 04:55:04.058465 waagent[1995]: 2025-09-09T04:55:04.058422Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Sep 9 04:55:04.065673 waagent[1995]: 2025-09-09T04:55:04.065643Z INFO Daemon Daemon cloud-init is enabled: False
Sep 9 04:55:04.069220 waagent[1995]: 2025-09-09T04:55:04.069197Z INFO Daemon Daemon Copying ovf-env.xml
Sep 9 04:55:04.179822 waagent[1995]: 2025-09-09T04:55:04.179693Z INFO Daemon Daemon Successfully mounted dvd
Sep 9 04:55:04.204817 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
Sep 9 04:55:04.208013 waagent[1995]: 2025-09-09T04:55:04.206616Z INFO Daemon Daemon Detect protocol endpoint
Sep 9 04:55:04.210029 waagent[1995]: 2025-09-09T04:55:04.209981Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Sep 9 04:55:04.213833 waagent[1995]: 2025-09-09T04:55:04.213803Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
Sep 9 04:55:04.218652 waagent[1995]: 2025-09-09T04:55:04.218626Z INFO Daemon Daemon Test for route to 168.63.129.16
Sep 9 04:55:04.222542 waagent[1995]: 2025-09-09T04:55:04.222513Z INFO Daemon Daemon Route to 168.63.129.16 exists
Sep 9 04:55:04.226404 waagent[1995]: 2025-09-09T04:55:04.226375Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
Sep 9 04:55:04.267109 waagent[1995]: 2025-09-09T04:55:04.267072Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
Sep 9 04:55:04.271926 waagent[1995]: 2025-09-09T04:55:04.271905Z INFO Daemon Daemon Wire protocol version:2012-11-30
Sep 9 04:55:04.275625 waagent[1995]: 2025-09-09T04:55:04.275602Z INFO Daemon Daemon Server preferred version:2015-04-05
Sep 9 04:55:04.359085 waagent[1995]: 2025-09-09T04:55:04.358983Z INFO Daemon Daemon Initializing goal state during protocol detection
Sep 9 04:55:04.363837 waagent[1995]: 2025-09-09T04:55:04.363803Z INFO Daemon Daemon Forcing an update of the goal state.
Sep 9 04:55:04.371330 waagent[1995]: 2025-09-09T04:55:04.371294Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1]
Sep 9 04:55:04.424602 waagent[1995]: 2025-09-09T04:55:04.424565Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175
Sep 9 04:55:04.428842 waagent[1995]: 2025-09-09T04:55:04.428808Z INFO Daemon
Sep 9 04:55:04.430884 waagent[1995]: 2025-09-09T04:55:04.430823Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 35be39fe-6d46-4ba2-9b22-85a69bb7657d eTag: 4759667202882487600 source: Fabric]
Sep 9 04:55:04.438990 waagent[1995]: 2025-09-09T04:55:04.438957Z INFO Daemon The vmSettings originated via Fabric; will ignore them.
Sep 9 04:55:04.443261 waagent[1995]: 2025-09-09T04:55:04.443234Z INFO Daemon
Sep 9 04:55:04.445293 waagent[1995]: 2025-09-09T04:55:04.445269Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1]
Sep 9 04:55:04.454047 waagent[1995]: 2025-09-09T04:55:04.454018Z INFO Daemon Daemon Downloading artifacts profile blob
Sep 9 04:55:04.512454 waagent[1995]: 2025-09-09T04:55:04.512403Z INFO Daemon Downloaded certificate {'thumbprint': '95AAD099CFB138C4FB03B25D38DDFE008E79E2FD', 'hasPrivateKey': True}
Sep 9 04:55:04.519068 login[2008]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 04:55:04.519656 waagent[1995]: 2025-09-09T04:55:04.519614Z INFO Daemon Fetch goal state completed
Sep 9 04:55:04.524675 systemd-logind[1850]: New session 2 of user core.
Sep 9 04:55:04.528891 waagent[1995]: 2025-09-09T04:55:04.528857Z INFO Daemon Daemon Starting provisioning
Sep 9 04:55:04.532733 waagent[1995]: 2025-09-09T04:55:04.532697Z INFO Daemon Daemon Handle ovf-env.xml.
Sep 9 04:55:04.535968 waagent[1995]: 2025-09-09T04:55:04.535943Z INFO Daemon Daemon Set hostname [ci-4452.0.0-n-087888047c]
Sep 9 04:55:04.537100 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 9 04:55:04.556003 waagent[1995]: 2025-09-09T04:55:04.555028Z INFO Daemon Daemon Publish hostname [ci-4452.0.0-n-087888047c]
Sep 9 04:55:04.559162 waagent[1995]: 2025-09-09T04:55:04.559126Z INFO Daemon Daemon Examine /proc/net/route for primary interface
Sep 9 04:55:04.563436 waagent[1995]: 2025-09-09T04:55:04.563405Z INFO Daemon Daemon Primary interface is [eth0]
Sep 9 04:55:04.572177 systemd-networkd[1687]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 04:55:04.572183 systemd-networkd[1687]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 9 04:55:04.572221 systemd-networkd[1687]: eth0: DHCP lease lost
Sep 9 04:55:04.574200 waagent[1995]: 2025-09-09T04:55:04.572909Z INFO Daemon Daemon Create user account if not exists
Sep 9 04:55:04.577581 waagent[1995]: 2025-09-09T04:55:04.577252Z INFO Daemon Daemon User core already exists, skip useradd
Sep 9 04:55:04.581593 waagent[1995]: 2025-09-09T04:55:04.581560Z INFO Daemon Daemon Configure sudoer
Sep 9 04:55:04.589626 waagent[1995]: 2025-09-09T04:55:04.589582Z INFO Daemon Daemon Configure sshd
Sep 9 04:55:04.595047 systemd-networkd[1687]: eth0: DHCPv4 address 10.200.20.39/24, gateway 10.200.20.1 acquired from 168.63.129.16
Sep 9 04:55:04.596572 waagent[1995]: 2025-09-09T04:55:04.596526Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive.
Sep 9 04:55:04.605043 waagent[1995]: 2025-09-09T04:55:04.605014Z INFO Daemon Daemon Deploy ssh public key.
Sep 9 04:55:05.756785 waagent[1995]: 2025-09-09T04:55:05.753640Z INFO Daemon Daemon Provisioning complete
Sep 9 04:55:05.770626 waagent[1995]: 2025-09-09T04:55:05.770592Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
Sep 9 04:55:05.774840 waagent[1995]: 2025-09-09T04:55:05.774809Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
Sep 9 04:55:05.781491 waagent[1995]: 2025-09-09T04:55:05.781465Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent
Sep 9 04:55:05.880037 waagent[2094]: 2025-09-09T04:55:05.879567Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4)
Sep 9 04:55:05.880037 waagent[2094]: 2025-09-09T04:55:05.879682Z INFO ExtHandler ExtHandler OS: flatcar 4452.0.0
Sep 9 04:55:05.880037 waagent[2094]: 2025-09-09T04:55:05.879719Z INFO ExtHandler ExtHandler Python: 3.11.13
Sep 9 04:55:05.880037 waagent[2094]: 2025-09-09T04:55:05.879751Z INFO ExtHandler ExtHandler CPU Arch: aarch64
Sep 9 04:55:05.943622 waagent[2094]: 2025-09-09T04:55:05.943557Z INFO ExtHandler ExtHandler Distro: flatcar-4452.0.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0;
Sep 9 04:55:05.943757 waagent[2094]: 2025-09-09T04:55:05.943730Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Sep 9 04:55:05.943797 waagent[2094]: 2025-09-09T04:55:05.943780Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Sep 9 04:55:05.949591 waagent[2094]: 2025-09-09T04:55:05.949547Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
Sep 9 04:55:05.954320 waagent[2094]: 2025-09-09T04:55:05.954289Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175
Sep 9 04:55:05.954661 waagent[2094]: 2025-09-09T04:55:05.954631Z INFO ExtHandler
Sep 9 04:55:05.954710 waagent[2094]: 2025-09-09T04:55:05.954693Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 16ae5c40-449b-4091-bdd6-38c9bab0928b eTag: 4759667202882487600 source: Fabric]
Sep 9 04:55:05.954923 waagent[2094]: 2025-09-09T04:55:05.954898Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Sep 9 04:55:05.955334 waagent[2094]: 2025-09-09T04:55:05.955304Z INFO ExtHandler
Sep 9 04:55:05.955372 waagent[2094]: 2025-09-09T04:55:05.955356Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1]
Sep 9 04:55:05.958461 waagent[2094]: 2025-09-09T04:55:05.958436Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
Sep 9 04:55:06.025037 waagent[2094]: 2025-09-09T04:55:06.024939Z INFO ExtHandler Downloaded certificate {'thumbprint': '95AAD099CFB138C4FB03B25D38DDFE008E79E2FD', 'hasPrivateKey': True}
Sep 9 04:55:06.025354 waagent[2094]: 2025-09-09T04:55:06.025321Z INFO ExtHandler Fetch goal state completed
Sep 9 04:55:06.036207 waagent[2094]: 2025-09-09T04:55:06.036165Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.4.2 1 Jul 2025 (Library: OpenSSL 3.4.2 1 Jul 2025)
Sep 9 04:55:06.039604 waagent[2094]: 2025-09-09T04:55:06.039563Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2094
Sep 9 04:55:06.039697 waagent[2094]: 2025-09-09T04:55:06.039674Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ********
Sep 9 04:55:06.039930 waagent[2094]: 2025-09-09T04:55:06.039904Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ********
Sep 9 04:55:06.040985 waagent[2094]: 2025-09-09T04:55:06.040950Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4452.0.0', '', 'Flatcar Container Linux by Kinvolk']
Sep 9 04:55:06.041328 waagent[2094]: 2025-09-09T04:55:06.041296Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4452.0.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported
Sep 9 04:55:06.041432 waagent[2094]: 2025-09-09T04:55:06.041411Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False
Sep 9 04:55:06.041839 waagent[2094]: 2025-09-09T04:55:06.041809Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Sep 9 04:55:06.110355 waagent[2094]: 2025-09-09T04:55:06.110323Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Sep 9 04:55:06.110492 waagent[2094]: 2025-09-09T04:55:06.110466Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Sep 9 04:55:06.115045 waagent[2094]: 2025-09-09T04:55:06.114612Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Sep 9 04:55:06.119130 systemd[1]: Reload requested from client PID 2109 ('systemctl') (unit waagent.service)...
Sep 9 04:55:06.119142 systemd[1]: Reloading...
Sep 9 04:55:06.175034 zram_generator::config[2148]: No configuration found.
Sep 9 04:55:06.330667 systemd[1]: Reloading finished in 211 ms.
Sep 9 04:55:06.357367 waagent[2094]: 2025-09-09T04:55:06.356199Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service
Sep 9 04:55:06.357367 waagent[2094]: 2025-09-09T04:55:06.356330Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully
Sep 9 04:55:07.345093 waagent[2094]: 2025-09-09T04:55:07.344934Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up.
Sep 9 04:55:07.345372 waagent[2094]: 2025-09-09T04:55:07.345280Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True]
Sep 9 04:55:07.345950 waagent[2094]: 2025-09-09T04:55:07.345910Z INFO ExtHandler ExtHandler Starting env monitor service.
Sep 9 04:55:07.346258 waagent[2094]: 2025-09-09T04:55:07.346182Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Sep 9 04:55:07.347005 waagent[2094]: 2025-09-09T04:55:07.346426Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Sep 9 04:55:07.347005 waagent[2094]: 2025-09-09T04:55:07.346496Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Sep 9 04:55:07.347005 waagent[2094]: 2025-09-09T04:55:07.346649Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Sep 9 04:55:07.347005 waagent[2094]: 2025-09-09T04:55:07.346775Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Sep 9 04:55:07.347005 waagent[2094]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Sep 9 04:55:07.347005 waagent[2094]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0
Sep 9 04:55:07.347005 waagent[2094]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Sep 9 04:55:07.347005 waagent[2094]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Sep 9 04:55:07.347005 waagent[2094]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Sep 9 04:55:07.347005 waagent[2094]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Sep 9 04:55:07.347273 waagent[2094]: 2025-09-09T04:55:07.347235Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Sep 9 04:55:07.347320 waagent[2094]: 2025-09-09T04:55:07.347279Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Sep 9 04:55:07.347665 waagent[2094]: 2025-09-09T04:55:07.347623Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Sep 9 04:55:07.347782 waagent[2094]: 2025-09-09T04:55:07.347751Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Sep 9 04:55:07.348282 waagent[2094]: 2025-09-09T04:55:07.348250Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Sep 9 04:55:07.348393 waagent[2094]: 2025-09-09T04:55:07.348375Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Sep 9 04:55:07.348492 waagent[2094]: 2025-09-09T04:55:07.348474Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Sep 9 04:55:07.349698 waagent[2094]: 2025-09-09T04:55:07.349658Z INFO EnvHandler ExtHandler Configure routes
Sep 9 04:55:07.349915 waagent[2094]: 2025-09-09T04:55:07.349892Z INFO EnvHandler ExtHandler Gateway:None
Sep 9 04:55:07.350217 waagent[2094]: 2025-09-09T04:55:07.350191Z INFO EnvHandler ExtHandler Routes:None
Sep 9 04:55:07.353620 waagent[2094]: 2025-09-09T04:55:07.353568Z INFO ExtHandler ExtHandler
Sep 9 04:55:07.353679 waagent[2094]: 2025-09-09T04:55:07.353654Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: b1444ef3-f15e-4684-b041-4e9e48b707de correlation c91d0986-07a8-4c2c-a2c7-b90658be43ff created: 2025-09-09T04:53:48.107018Z]
Sep 9 04:55:07.354283 waagent[2094]: 2025-09-09T04:55:07.354229Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
Sep 9 04:55:07.355363 waagent[2094]: 2025-09-09T04:55:07.355285Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms]
Sep 9 04:55:07.377844 waagent[2094]: 2025-09-09T04:55:07.377786Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command
Sep 9 04:55:07.377844 waagent[2094]: Try `iptables -h' or 'iptables --help' for more information.)
Sep 9 04:55:07.378176 waagent[2094]: 2025-09-09T04:55:07.378131Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: C4A49222-A4E6-43AD-8666-C4D99933135C;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;]
Sep 9 04:55:07.440330 waagent[2094]: 2025-09-09T04:55:07.440271Z INFO MonitorHandler ExtHandler Network interfaces:
Sep 9 04:55:07.440330 waagent[2094]: Executing ['ip', '-a', '-o', 'link']:
Sep 9 04:55:07.440330 waagent[2094]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Sep 9 04:55:07.440330 waagent[2094]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:77:2b:01 brd ff:ff:ff:ff:ff:ff
Sep 9 04:55:07.440330 waagent[2094]: 3: enP9701s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:77:2b:01 brd ff:ff:ff:ff:ff:ff\ altname enP9701p0s2
Sep 9 04:55:07.440330 waagent[2094]: Executing ['ip', '-4', '-a', '-o', 'address']:
Sep 9 04:55:07.440330 waagent[2094]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Sep 9 04:55:07.440330 waagent[2094]: 2: eth0 inet 10.200.20.39/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever
Sep 9 04:55:07.440330 waagent[2094]: Executing ['ip', '-6', '-a', '-o', 'address']:
Sep 9 04:55:07.440330 waagent[2094]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever
Sep 9 04:55:07.440330 waagent[2094]: 2: eth0 inet6 fe80::222:48ff:fe77:2b01/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Sep 9 04:55:07.486569 waagent[2094]: 2025-09-09T04:55:07.486517Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric:
Sep 9 04:55:07.486569 waagent[2094]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Sep 9 04:55:07.486569 waagent[2094]: pkts bytes target prot opt in out source destination
Sep 9 04:55:07.486569 waagent[2094]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Sep 9 04:55:07.486569 waagent[2094]: pkts bytes target prot opt in out source destination
Sep 9 04:55:07.486569 waagent[2094]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Sep 9 04:55:07.486569 waagent[2094]: pkts bytes target prot opt in out source destination
Sep 9 04:55:07.486569 waagent[2094]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Sep 9 04:55:07.486569 waagent[2094]: 4 416 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Sep 9 04:55:07.486569 waagent[2094]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Sep 9 04:55:07.489159 waagent[2094]: 2025-09-09T04:55:07.489123Z INFO EnvHandler ExtHandler Current Firewall rules:
Sep 9 04:55:07.489159 waagent[2094]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Sep 9 04:55:07.489159 waagent[2094]: pkts bytes target prot opt in out source destination
Sep 9 04:55:07.489159 waagent[2094]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Sep 9 04:55:07.489159 waagent[2094]: pkts bytes target prot opt in out source destination
Sep 9 04:55:07.489159 waagent[2094]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Sep 9 04:55:07.489159 waagent[2094]: pkts bytes target prot opt in out source destination
Sep 9 04:55:07.489159 waagent[2094]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Sep 9 04:55:07.489159 waagent[2094]: 9 816 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Sep 9 04:55:07.489159 waagent[2094]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Sep 9 04:55:07.489338 waagent[2094]: 2025-09-09T04:55:07.489315Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300
Sep 9 04:55:12.603336 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 9 04:55:12.604596 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 04:55:12.945962 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 04:55:12.948915 (kubelet)[2243]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 9 04:55:12.980384 kubelet[2243]: E0909 04:55:12.980341 2243 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 9 04:55:12.982984 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 9 04:55:12.983102 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 9 04:55:12.983489 systemd[1]: kubelet.service: Consumed 109ms CPU time, 106.7M memory peak.
Sep 9 04:55:23.234049 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 9 04:55:23.235329 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 04:55:23.548826 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 04:55:23.551570 (kubelet)[2257]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 9 04:55:23.580415 kubelet[2257]: E0909 04:55:23.580370 2257 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 9 04:55:23.582212 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 9 04:55:23.582313 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 9 04:55:23.582710 systemd[1]: kubelet.service: Consumed 103ms CPU time, 107.6M memory peak.
Sep 9 04:55:25.013813 chronyd[1828]: Selected source PHC0
Sep 9 04:55:33.832710 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Sep 9 04:55:33.834056 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 04:55:34.195969 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 04:55:34.198759 (kubelet)[2272]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 9 04:55:34.223883 kubelet[2272]: E0909 04:55:34.223840 2272 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 9 04:55:34.225823 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 9 04:55:34.225925 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 9 04:55:34.226382 systemd[1]: kubelet.service: Consumed 103ms CPU time, 107.1M memory peak.
Sep 9 04:55:34.952853 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 9 04:55:34.954842 systemd[1]: Started sshd@0-10.200.20.39:22-10.200.16.10:45430.service - OpenSSH per-connection server daemon (10.200.16.10:45430).
Sep 9 04:55:35.608844 sshd[2279]: Accepted publickey for core from 10.200.16.10 port 45430 ssh2: RSA SHA256:3adVRBnOdwvD6jSty2nKG447nUgR57TdklVE855KMY8
Sep 9 04:55:35.609910 sshd-session[2279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 04:55:35.613802 systemd-logind[1850]: New session 3 of user core.
Sep 9 04:55:35.620082 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 9 04:55:36.007401 systemd[1]: Started sshd@1-10.200.20.39:22-10.200.16.10:45444.service - OpenSSH per-connection server daemon (10.200.16.10:45444).
Sep 9 04:55:36.431369 sshd[2285]: Accepted publickey for core from 10.200.16.10 port 45444 ssh2: RSA SHA256:3adVRBnOdwvD6jSty2nKG447nUgR57TdklVE855KMY8
Sep 9 04:55:36.434043 sshd-session[2285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 04:55:36.437498 systemd-logind[1850]: New session 4 of user core.
Sep 9 04:55:36.448096 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 9 04:55:36.739150 sshd[2288]: Connection closed by 10.200.16.10 port 45444
Sep 9 04:55:36.739791 sshd-session[2285]: pam_unix(sshd:session): session closed for user core
Sep 9 04:55:36.743286 systemd-logind[1850]: Session 4 logged out. Waiting for processes to exit.
Sep 9 04:55:36.743530 systemd[1]: sshd@1-10.200.20.39:22-10.200.16.10:45444.service: Deactivated successfully.
Sep 9 04:55:36.744836 systemd[1]: session-4.scope: Deactivated successfully.
Sep 9 04:55:36.747131 systemd-logind[1850]: Removed session 4.
Sep 9 04:55:36.819383 systemd[1]: Started sshd@2-10.200.20.39:22-10.200.16.10:45456.service - OpenSSH per-connection server daemon (10.200.16.10:45456).
Sep 9 04:55:37.249224 sshd[2294]: Accepted publickey for core from 10.200.16.10 port 45456 ssh2: RSA SHA256:3adVRBnOdwvD6jSty2nKG447nUgR57TdklVE855KMY8
Sep 9 04:55:37.250323 sshd-session[2294]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 04:55:37.253859 systemd-logind[1850]: New session 5 of user core.
Sep 9 04:55:37.261176 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 9 04:55:37.558094 sshd[2297]: Connection closed by 10.200.16.10 port 45456
Sep 9 04:55:37.558596 sshd-session[2294]: pam_unix(sshd:session): session closed for user core
Sep 9 04:55:37.561930 systemd[1]: sshd@2-10.200.20.39:22-10.200.16.10:45456.service: Deactivated successfully.
Sep 9 04:55:37.563181 systemd[1]: session-5.scope: Deactivated successfully.
Sep 9 04:55:37.563760 systemd-logind[1850]: Session 5 logged out. Waiting for processes to exit.
Sep 9 04:55:37.564744 systemd-logind[1850]: Removed session 5.
Sep 9 04:55:37.629275 systemd[1]: Started sshd@3-10.200.20.39:22-10.200.16.10:45464.service - OpenSSH per-connection server daemon (10.200.16.10:45464).
Sep 9 04:55:38.044314 sshd[2303]: Accepted publickey for core from 10.200.16.10 port 45464 ssh2: RSA SHA256:3adVRBnOdwvD6jSty2nKG447nUgR57TdklVE855KMY8
Sep 9 04:55:38.045336 sshd-session[2303]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 04:55:38.049165 systemd-logind[1850]: New session 6 of user core.
Sep 9 04:55:38.055109 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 9 04:55:38.362751 sshd[2306]: Connection closed by 10.200.16.10 port 45464
Sep 9 04:55:38.363282 sshd-session[2303]: pam_unix(sshd:session): session closed for user core
Sep 9 04:55:38.366381 systemd[1]: sshd@3-10.200.20.39:22-10.200.16.10:45464.service: Deactivated successfully.
Sep 9 04:55:38.368057 systemd[1]: session-6.scope: Deactivated successfully.
Sep 9 04:55:38.368922 systemd-logind[1850]: Session 6 logged out. Waiting for processes to exit.
Sep 9 04:55:38.370529 systemd-logind[1850]: Removed session 6.
Sep 9 04:55:38.444704 systemd[1]: Started sshd@4-10.200.20.39:22-10.200.16.10:45468.service - OpenSSH per-connection server daemon (10.200.16.10:45468).
Sep 9 04:55:38.868647 sshd[2312]: Accepted publickey for core from 10.200.16.10 port 45468 ssh2: RSA SHA256:3adVRBnOdwvD6jSty2nKG447nUgR57TdklVE855KMY8
Sep 9 04:55:38.869667 sshd-session[2312]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 04:55:38.873339 systemd-logind[1850]: New session 7 of user core.
Sep 9 04:55:38.880268 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 9 04:55:39.248207 sudo[2316]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 9 04:55:39.248424 sudo[2316]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 9 04:55:39.276388 sudo[2316]: pam_unix(sudo:session): session closed for user root
Sep 9 04:55:39.357372 sshd[2315]: Connection closed by 10.200.16.10 port 45468
Sep 9 04:55:39.356608 sshd-session[2312]: pam_unix(sshd:session): session closed for user core
Sep 9 04:55:39.359613 systemd-logind[1850]: Session 7 logged out. Waiting for processes to exit.
Sep 9 04:55:39.359760 systemd[1]: sshd@4-10.200.20.39:22-10.200.16.10:45468.service: Deactivated successfully.
Sep 9 04:55:39.361443 systemd[1]: session-7.scope: Deactivated successfully.
Sep 9 04:55:39.363398 systemd-logind[1850]: Removed session 7.
Sep 9 04:55:39.432451 systemd[1]: Started sshd@5-10.200.20.39:22-10.200.16.10:45478.service - OpenSSH per-connection server daemon (10.200.16.10:45478).
Sep 9 04:55:39.856565 sshd[2322]: Accepted publickey for core from 10.200.16.10 port 45478 ssh2: RSA SHA256:3adVRBnOdwvD6jSty2nKG447nUgR57TdklVE855KMY8
Sep 9 04:55:39.858022 sshd-session[2322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 04:55:39.861357 systemd-logind[1850]: New session 8 of user core.
Sep 9 04:55:39.869275 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 9 04:55:40.097908 sudo[2327]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 9 04:55:40.098136 sudo[2327]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 9 04:55:40.104484 sudo[2327]: pam_unix(sudo:session): session closed for user root
Sep 9 04:55:40.108033 sudo[2326]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Sep 9 04:55:40.108232 sudo[2326]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 9 04:55:40.116257 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 9 04:55:40.144089 augenrules[2349]: No rules
Sep 9 04:55:40.145246 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 9 04:55:40.145439 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 9 04:55:40.147437 sudo[2326]: pam_unix(sudo:session): session closed for user root
Sep 9 04:55:40.218645 sshd[2325]: Connection closed by 10.200.16.10 port 45478
Sep 9 04:55:40.219174 sshd-session[2322]: pam_unix(sshd:session): session closed for user core
Sep 9 04:55:40.222455 systemd[1]: sshd@5-10.200.20.39:22-10.200.16.10:45478.service: Deactivated successfully.
Sep 9 04:55:40.223787 systemd[1]: session-8.scope: Deactivated successfully.
Sep 9 04:55:40.224366 systemd-logind[1850]: Session 8 logged out. Waiting for processes to exit.
Sep 9 04:55:40.225706 systemd-logind[1850]: Removed session 8.
Sep 9 04:55:40.311191 systemd[1]: Started sshd@6-10.200.20.39:22-10.200.16.10:37518.service - OpenSSH per-connection server daemon (10.200.16.10:37518).
Sep 9 04:55:40.799129 sshd[2358]: Accepted publickey for core from 10.200.16.10 port 37518 ssh2: RSA SHA256:3adVRBnOdwvD6jSty2nKG447nUgR57TdklVE855KMY8
Sep 9 04:55:40.800135 sshd-session[2358]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 04:55:40.803675 systemd-logind[1850]: New session 9 of user core.
Sep 9 04:55:40.814330 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 9 04:55:41.073217 sudo[2362]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 9 04:55:41.073417 sudo[2362]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 9 04:55:42.628195 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 9 04:55:42.639239 (dockerd)[2379]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 9 04:55:43.343847 kernel: hv_balloon: Max. dynamic memory size: 4096 MB
Sep 9 04:55:43.665447 dockerd[2379]: time="2025-09-09T04:55:43.665131500Z" level=info msg="Starting up"
Sep 9 04:55:43.666340 dockerd[2379]: time="2025-09-09T04:55:43.666318452Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Sep 9 04:55:43.674675 dockerd[2379]: time="2025-09-09T04:55:43.674642888Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Sep 9 04:55:43.736786 dockerd[2379]: time="2025-09-09T04:55:43.736746187Z" level=info msg="Loading containers: start."
Sep 9 04:55:43.807010 kernel: Initializing XFRM netlink socket
Sep 9 04:55:44.226734 systemd-networkd[1687]: docker0: Link UP
Sep 9 04:55:44.243452 dockerd[2379]: time="2025-09-09T04:55:44.243411002Z" level=info msg="Loading containers: done."
Sep 9 04:55:44.251942 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Sep 9 04:55:44.253021 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck463891949-merged.mount: Deactivated successfully.
Sep 9 04:55:44.255145 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 04:55:44.269660 dockerd[2379]: time="2025-09-09T04:55:44.269614182Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 9 04:55:44.269753 dockerd[2379]: time="2025-09-09T04:55:44.269701329Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Sep 9 04:55:44.269799 dockerd[2379]: time="2025-09-09T04:55:44.269783484Z" level=info msg="Initializing buildkit"
Sep 9 04:55:44.794075 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 04:55:44.796969 (kubelet)[2582]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 9 04:55:44.821427 kubelet[2582]: E0909 04:55:44.821322 2582 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 9 04:55:44.823257 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 9 04:55:44.823364 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 9 04:55:44.823811 systemd[1]: kubelet.service: Consumed 105ms CPU time, 107M memory peak.
Sep 9 04:55:44.832075 dockerd[2379]: time="2025-09-09T04:55:44.832026896Z" level=info msg="Completed buildkit initialization"
Sep 9 04:55:44.837618 dockerd[2379]: time="2025-09-09T04:55:44.837574141Z" level=info msg="Daemon has completed initialization"
Sep 9 04:55:44.838098 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 9 04:55:44.838642 dockerd[2379]: time="2025-09-09T04:55:44.838085502Z" level=info msg="API listen on /run/docker.sock"
Sep 9 04:55:45.706327 containerd[1866]: time="2025-09-09T04:55:45.706284278Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\""
Sep 9 04:55:46.374916 update_engine[1852]: I20250909 04:55:46.374792 1852 update_attempter.cc:509] Updating boot flags...
Sep 9 04:55:46.549444 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3115091965.mount: Deactivated successfully.
Sep 9 04:55:47.704878 containerd[1866]: time="2025-09-09T04:55:47.704822612Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:55:47.708572 containerd[1866]: time="2025-09-09T04:55:47.708415174Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.8: active requests=0, bytes read=26328357"
Sep 9 04:55:47.712064 containerd[1866]: time="2025-09-09T04:55:47.712042082Z" level=info msg="ImageCreate event name:\"sha256:61d628eec7e2101b908b4476f1e8e620490a9e8754184860c8eed25183acaa8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:55:47.717203 containerd[1866]: time="2025-09-09T04:55:47.717171632Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:55:47.717849 containerd[1866]: time="2025-09-09T04:55:47.717630696Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.8\" with image id \"sha256:61d628eec7e2101b908b4476f1e8e620490a9e8754184860c8eed25183acaa8a\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\", size \"26325157\" in 2.011310065s"
Sep 9 04:55:47.717849 containerd[1866]: time="2025-09-09T04:55:47.717660985Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\" returns image reference \"sha256:61d628eec7e2101b908b4476f1e8e620490a9e8754184860c8eed25183acaa8a\""
Sep 9 04:55:47.718415 containerd[1866]: time="2025-09-09T04:55:47.718157970Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\""
Sep 9 04:55:48.894855 containerd[1866]: time="2025-09-09T04:55:48.894275984Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:55:48.898698 containerd[1866]: time="2025-09-09T04:55:48.898672470Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.8: active requests=0, bytes read=22528552"
Sep 9 04:55:48.902340 containerd[1866]: time="2025-09-09T04:55:48.902320794Z" level=info msg="ImageCreate event name:\"sha256:f17de36e40fc7cc372be0021b2c58ad61f05d3ebe4d430551bc5e4cd9ed2a061\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:55:48.906688 containerd[1866]: time="2025-09-09T04:55:48.906657726Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:55:48.907395 containerd[1866]: time="2025-09-09T04:55:48.907243490Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.8\" with image id \"sha256:f17de36e40fc7cc372be0021b2c58ad61f05d3ebe4d430551bc5e4cd9ed2a061\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\", size \"24065666\" in 1.18906351s"
Sep 9 04:55:48.907395 containerd[1866]: time="2025-09-09T04:55:48.907273683Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\" returns image reference \"sha256:f17de36e40fc7cc372be0021b2c58ad61f05d3ebe4d430551bc5e4cd9ed2a061\""
Sep 9 04:55:48.907667 containerd[1866]: time="2025-09-09T04:55:48.907649856Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\""
Sep 9 04:55:50.273523 containerd[1866]: time="2025-09-09T04:55:50.273469300Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:55:50.277043 containerd[1866]: time="2025-09-09T04:55:50.277015267Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.8: active requests=0, bytes read=17483527"
Sep 9 04:55:50.280859 containerd[1866]: time="2025-09-09T04:55:50.280821234Z" level=info msg="ImageCreate event name:\"sha256:fe86d26bce3ccd5f0c4057c205b63fde1c8c752778025aea4605ffc3b0f80211\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:55:50.287107 containerd[1866]: time="2025-09-09T04:55:50.287056837Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:55:50.287673 containerd[1866]: time="2025-09-09T04:55:50.287526308Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.8\" with image id \"sha256:fe86d26bce3ccd5f0c4057c205b63fde1c8c752778025aea4605ffc3b0f80211\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\", size \"19020659\" in 1.379690005s"
Sep 9 04:55:50.287673 containerd[1866]: time="2025-09-09T04:55:50.287553509Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\" returns image reference \"sha256:fe86d26bce3ccd5f0c4057c205b63fde1c8c752778025aea4605ffc3b0f80211\""
Sep 9 04:55:50.288282 containerd[1866]: time="2025-09-09T04:55:50.288249098Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\""
Sep 9 04:55:51.781947 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2344231852.mount: Deactivated successfully.
Sep 9 04:55:52.093585 containerd[1866]: time="2025-09-09T04:55:52.093540691Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:55:52.097297 containerd[1866]: time="2025-09-09T04:55:52.097270104Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.8: active requests=0, bytes read=27376724"
Sep 9 04:55:52.100885 containerd[1866]: time="2025-09-09T04:55:52.100855472Z" level=info msg="ImageCreate event name:\"sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:55:52.105449 containerd[1866]: time="2025-09-09T04:55:52.105421543Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:55:52.106096 containerd[1866]: time="2025-09-09T04:55:52.106072355Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.8\" with image id \"sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b\", repo tag \"registry.k8s.io/kube-proxy:v1.32.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\", size \"27375743\" in 1.817707877s"
Sep 9 04:55:52.106127 containerd[1866]: time="2025-09-09T04:55:52.106100692Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\" returns image reference \"sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b\""
Sep 9 04:55:52.106592 containerd[1866]: time="2025-09-09T04:55:52.106571130Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 9 04:55:52.750960 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount339646818.mount: Deactivated successfully.
Sep 9 04:55:53.656625 containerd[1866]: time="2025-09-09T04:55:53.656515544Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:55:53.659547 containerd[1866]: time="2025-09-09T04:55:53.659520614Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622"
Sep 9 04:55:53.662556 containerd[1866]: time="2025-09-09T04:55:53.662532236Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:55:53.671327 containerd[1866]: time="2025-09-09T04:55:53.671299686Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:55:53.671770 containerd[1866]: time="2025-09-09T04:55:53.671656673Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.565060766s"
Sep 9 04:55:53.671770 containerd[1866]: time="2025-09-09T04:55:53.671681130Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Sep 9 04:55:53.672439 containerd[1866]: time="2025-09-09T04:55:53.672416985Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 9 04:55:54.234637 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount938276291.mount: Deactivated successfully.
Sep 9 04:55:54.256031 containerd[1866]: time="2025-09-09T04:55:54.255544785Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 9 04:55:54.259403 containerd[1866]: time="2025-09-09T04:55:54.259381240Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
Sep 9 04:55:54.266185 containerd[1866]: time="2025-09-09T04:55:54.266164996Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 9 04:55:54.270137 containerd[1866]: time="2025-09-09T04:55:54.270113784Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 9 04:55:54.270460 containerd[1866]: time="2025-09-09T04:55:54.270433770Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 597.991385ms"
Sep 9 04:55:54.270460 containerd[1866]: time="2025-09-09T04:55:54.270461811Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Sep 9 04:55:54.270878 containerd[1866]: time="2025-09-09T04:55:54.270854999Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Sep 9 04:55:54.832417 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Sep 9 04:55:54.833702 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 04:55:54.984739 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 04:55:54.990194 (kubelet)[2798]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 9 04:55:55.029305 kubelet[2798]: E0909 04:55:55.029251 2798 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 9 04:55:55.031261 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 9 04:55:55.031497 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 9 04:55:55.032045 systemd[1]: kubelet.service: Consumed 104ms CPU time, 104.9M memory peak.
Sep 9 04:55:55.409175 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount829630073.mount: Deactivated successfully.
Sep 9 04:55:57.630361 containerd[1866]: time="2025-09-09T04:55:57.630150078Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:55:57.633341 containerd[1866]: time="2025-09-09T04:55:57.633309057Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943165"
Sep 9 04:55:57.637136 containerd[1866]: time="2025-09-09T04:55:57.637108390Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:55:57.725830 containerd[1866]: time="2025-09-09T04:55:57.725724531Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:55:57.727369 containerd[1866]: time="2025-09-09T04:55:57.727269651Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 3.456384571s"
Sep 9 04:55:57.727369 containerd[1866]: time="2025-09-09T04:55:57.727307180Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\""
Sep 9 04:56:00.513037 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 04:56:00.513467 systemd[1]: kubelet.service: Consumed 104ms CPU time, 104.9M memory peak.
Sep 9 04:56:00.515474 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 04:56:00.534894 systemd[1]: Reload requested from client PID 2885 ('systemctl') (unit session-9.scope)...
Sep 9 04:56:00.535075 systemd[1]: Reloading...
Sep 9 04:56:00.625061 zram_generator::config[2944]: No configuration found.
Sep 9 04:56:00.770624 systemd[1]: Reloading finished in 235 ms.
Sep 9 04:56:00.805571 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 9 04:56:00.805628 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 9 04:56:00.805952 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 04:56:00.806013 systemd[1]: kubelet.service: Consumed 75ms CPU time, 95M memory peak.
Sep 9 04:56:00.807531 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 04:56:01.058215 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 04:56:01.066253 (kubelet)[2999]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 9 04:56:01.091905 kubelet[2999]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 9 04:56:01.091905 kubelet[2999]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 9 04:56:01.091905 kubelet[2999]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 9 04:56:01.092247 kubelet[2999]: I0909 04:56:01.091941 2999 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 9 04:56:01.436494 kubelet[2999]: I0909 04:56:01.436451 2999 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Sep 9 04:56:01.436626 kubelet[2999]: I0909 04:56:01.436619 2999 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 9 04:56:01.436908 kubelet[2999]: I0909 04:56:01.436890 2999 server.go:954] "Client rotation is on, will bootstrap in background"
Sep 9 04:56:01.457361 kubelet[2999]: E0909 04:56:01.457320 2999 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.39:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.39:6443: connect: connection refused" logger="UnhandledError"
Sep 9 04:56:01.458928 kubelet[2999]: I0909 04:56:01.458904 2999 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 9 04:56:01.465432 kubelet[2999]: I0909 04:56:01.465408 2999 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 9 04:56:01.468076 kubelet[2999]: I0909 04:56:01.468051 2999 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 9 04:56:01.468865 kubelet[2999]: I0909 04:56:01.468828 2999 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 9 04:56:01.469020 kubelet[2999]: I0909 04:56:01.468865 2999 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4452.0.0-n-087888047c","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 9 04:56:01.469099 kubelet[2999]: I0909 04:56:01.469030 2999 topology_manager.go:138] "Creating topology manager with none policy"
Sep 9 04:56:01.469099 kubelet[2999]: I0909 04:56:01.469037 2999 container_manager_linux.go:304] "Creating device plugin manager"
Sep 9 04:56:01.469183 kubelet[2999]: I0909 04:56:01.469169 2999 state_mem.go:36] "Initialized new in-memory state store"
Sep 9 04:56:01.471669 kubelet[2999]: I0909 04:56:01.471649 2999 kubelet.go:446] "Attempting to sync node with API server"
Sep 9 04:56:01.471690 kubelet[2999]: I0909 04:56:01.471676 2999 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 9 04:56:01.471717 kubelet[2999]: I0909 04:56:01.471698 2999 kubelet.go:352] "Adding apiserver pod source"
Sep 9 04:56:01.471717 kubelet[2999]: I0909 04:56:01.471709 2999 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 9 04:56:01.477429 kubelet[2999]: W0909 04:56:01.477062 2999 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.39:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.39:6443: connect: connection refused
Sep 9 04:56:01.477429 kubelet[2999]: E0909 04:56:01.477111 2999 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.39:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.39:6443: connect: connection refused" logger="UnhandledError"
Sep 9 04:56:01.477972 kubelet[2999]: W0909 04:56:01.477917 2999 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4452.0.0-n-087888047c&limit=500&resourceVersion=0": dial tcp 10.200.20.39:6443: connect: connection refused
Sep 9 04:56:01.477972 kubelet[2999]: E0909 04:56:01.477944 2999 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4452.0.0-n-087888047c&limit=500&resourceVersion=0\": dial tcp 10.200.20.39:6443: connect: connection refused" logger="UnhandledError"
Sep 9 04:56:01.478059 kubelet[2999]: I0909 04:56:01.478030 2999 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Sep 9 04:56:01.478452 kubelet[2999]: I0909 04:56:01.478359 2999 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 9 04:56:01.478452 kubelet[2999]: W0909 04:56:01.478406 2999 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 9 04:56:01.478881 kubelet[2999]: I0909 04:56:01.478862 2999 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 9 04:56:01.478929 kubelet[2999]: I0909 04:56:01.478895 2999 server.go:1287] "Started kubelet"
Sep 9 04:56:01.480020 kubelet[2999]: I0909 04:56:01.479675 2999 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Sep 9 04:56:01.480294 kubelet[2999]: I0909 04:56:01.480277 2999 server.go:479] "Adding debug handlers to kubelet server"
Sep 9 04:56:01.482010 kubelet[2999]: I0909 04:56:01.481979 2999 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 9 04:56:01.482302 kubelet[2999]: I0909 04:56:01.482254 2999 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 9 04:56:01.482567 kubelet[2999]: I0909 04:56:01.482552 2999 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 9 04:56:01.483222 kubelet[2999]: E0909 04:56:01.483116 2999 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.39:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.39:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4452.0.0-n-087888047c.18638450a36fa729 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4452.0.0-n-087888047c,UID:ci-4452.0.0-n-087888047c,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4452.0.0-n-087888047c,},FirstTimestamp:2025-09-09 04:56:01.478879017 +0000 UTC m=+0.409914812,LastTimestamp:2025-09-09 04:56:01.478879017 +0000 UTC m=+0.409914812,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4452.0.0-n-087888047c,}"
Sep 9 04:56:01.485330 kubelet[2999]: I0909 04:56:01.485309 2999 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 9 04:56:01.486999 kubelet[2999]: E0909 04:56:01.486584 2999 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4452.0.0-n-087888047c\" not found"
Sep 9 04:56:01.487131 kubelet[2999]: I0909 04:56:01.487117 2999 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 9 04:56:01.487333 kubelet[2999]: I0909 04:56:01.487316 2999 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 9 04:56:01.487601 kubelet[2999]: I0909 04:56:01.487586 2999 reconciler.go:26] "Reconciler: start to sync state"
Sep 9 04:56:01.487970 kubelet[2999]: W0909 04:56:01.487940 2999 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.39:6443: connect: connection refused
Sep 9 04:56:01.488106 kubelet[2999]: E0909 04:56:01.488086 2999 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.39:6443: connect: connection refused" logger="UnhandledError"
Sep 9 04:56:01.489151 kubelet[2999]: E0909 04:56:01.489122 2999 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4452.0.0-n-087888047c?timeout=10s\": dial tcp 10.200.20.39:6443: connect: connection refused" interval="200ms"
Sep 9 04:56:01.489398 kubelet[2999]: I0909 04:56:01.489377 2999 factory.go:221] Registration of the systemd container factory successfully
Sep 9 04:56:01.489541 kubelet[2999]: I0909 04:56:01.489525 2999 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 9 04:56:01.490355 kubelet[2999]: E0909 04:56:01.490339 2999 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 9 04:56:01.490729 kubelet[2999]: I0909 04:56:01.490710 2999 factory.go:221] Registration of the containerd container factory successfully
Sep 9 04:56:01.516109 kubelet[2999]: I0909 04:56:01.516089 2999 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 9 04:56:01.516260 kubelet[2999]: I0909 04:56:01.516250 2999 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 9 04:56:01.516303 kubelet[2999]: I0909 04:56:01.516297 2999 state_mem.go:36] "Initialized new in-memory state store"
Sep 9 04:56:01.587234 kubelet[2999]: E0909 04:56:01.587197 2999 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4452.0.0-n-087888047c\" not found"
Sep 9 04:56:01.628079 kubelet[2999]: I0909 04:56:01.628043 2999 policy_none.go:49] "None policy: Start"
Sep 9 04:56:01.628579 kubelet[2999]: I0909 04:56:01.628270 2999 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 9 04:56:01.628579 kubelet[2999]: I0909 04:56:01.628292 2999 state_mem.go:35] "Initializing new in-memory state store"
Sep 9 04:56:01.638398 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Sep 9 04:56:01.648448 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Sep 9 04:56:01.652200 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Sep 9 04:56:01.659834 kubelet[2999]: I0909 04:56:01.659784 2999 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 9 04:56:01.660876 kubelet[2999]: I0909 04:56:01.660848 2999 kubelet_network_linux.go:50] "Initialized iptables rules."
protocol="IPv6" Sep 9 04:56:01.660876 kubelet[2999]: I0909 04:56:01.660872 2999 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 9 04:56:01.660963 kubelet[2999]: I0909 04:56:01.660889 2999 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 9 04:56:01.660963 kubelet[2999]: I0909 04:56:01.660895 2999 kubelet.go:2382] "Starting kubelet main sync loop" Sep 9 04:56:01.660963 kubelet[2999]: E0909 04:56:01.660932 2999 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 04:56:01.662305 kubelet[2999]: I0909 04:56:01.661638 2999 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 9 04:56:01.662305 kubelet[2999]: I0909 04:56:01.661810 2999 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 04:56:01.662305 kubelet[2999]: I0909 04:56:01.661821 2999 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 04:56:01.663583 kubelet[2999]: I0909 04:56:01.663565 2999 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 04:56:01.664767 kubelet[2999]: W0909 04:56:01.664727 2999 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.39:6443: connect: connection refused Sep 9 04:56:01.664837 kubelet[2999]: E0909 04:56:01.664773 2999 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.39:6443: connect: connection refused" logger="UnhandledError" Sep 
9 04:56:01.665179 kubelet[2999]: E0909 04:56:01.665164 2999 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 9 04:56:01.665287 kubelet[2999]: E0909 04:56:01.665277 2999 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4452.0.0-n-087888047c\" not found" Sep 9 04:56:01.690191 kubelet[2999]: E0909 04:56:01.690066 2999 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4452.0.0-n-087888047c?timeout=10s\": dial tcp 10.200.20.39:6443: connect: connection refused" interval="400ms" Sep 9 04:56:01.764010 kubelet[2999]: I0909 04:56:01.763754 2999 kubelet_node_status.go:75] "Attempting to register node" node="ci-4452.0.0-n-087888047c" Sep 9 04:56:01.764118 kubelet[2999]: E0909 04:56:01.764071 2999 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.39:6443/api/v1/nodes\": dial tcp 10.200.20.39:6443: connect: connection refused" node="ci-4452.0.0-n-087888047c" Sep 9 04:56:01.770169 systemd[1]: Created slice kubepods-burstable-podac5bd3042885e29d49b6a54a4a80da3d.slice - libcontainer container kubepods-burstable-podac5bd3042885e29d49b6a54a4a80da3d.slice. Sep 9 04:56:01.777777 kubelet[2999]: E0909 04:56:01.777647 2999 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4452.0.0-n-087888047c\" not found" node="ci-4452.0.0-n-087888047c" Sep 9 04:56:01.779895 systemd[1]: Created slice kubepods-burstable-pod62e51bf24f403d84420ff87ce0bb4118.slice - libcontainer container kubepods-burstable-pod62e51bf24f403d84420ff87ce0bb4118.slice. 
Sep 9 04:56:01.781896 kubelet[2999]: E0909 04:56:01.781872 2999 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4452.0.0-n-087888047c\" not found" node="ci-4452.0.0-n-087888047c" Sep 9 04:56:01.789085 kubelet[2999]: I0909 04:56:01.789057 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a64348e0a8591af95fa22648035f3d69-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4452.0.0-n-087888047c\" (UID: \"a64348e0a8591af95fa22648035f3d69\") " pod="kube-system/kube-apiserver-ci-4452.0.0-n-087888047c" Sep 9 04:56:01.789085 kubelet[2999]: I0909 04:56:01.789085 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ac5bd3042885e29d49b6a54a4a80da3d-ca-certs\") pod \"kube-controller-manager-ci-4452.0.0-n-087888047c\" (UID: \"ac5bd3042885e29d49b6a54a4a80da3d\") " pod="kube-system/kube-controller-manager-ci-4452.0.0-n-087888047c" Sep 9 04:56:01.789172 kubelet[2999]: I0909 04:56:01.789099 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ac5bd3042885e29d49b6a54a4a80da3d-kubeconfig\") pod \"kube-controller-manager-ci-4452.0.0-n-087888047c\" (UID: \"ac5bd3042885e29d49b6a54a4a80da3d\") " pod="kube-system/kube-controller-manager-ci-4452.0.0-n-087888047c" Sep 9 04:56:01.789172 kubelet[2999]: I0909 04:56:01.789110 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/62e51bf24f403d84420ff87ce0bb4118-kubeconfig\") pod \"kube-scheduler-ci-4452.0.0-n-087888047c\" (UID: \"62e51bf24f403d84420ff87ce0bb4118\") " pod="kube-system/kube-scheduler-ci-4452.0.0-n-087888047c" Sep 9 04:56:01.789172 kubelet[2999]: I0909 
04:56:01.789122 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a64348e0a8591af95fa22648035f3d69-ca-certs\") pod \"kube-apiserver-ci-4452.0.0-n-087888047c\" (UID: \"a64348e0a8591af95fa22648035f3d69\") " pod="kube-system/kube-apiserver-ci-4452.0.0-n-087888047c" Sep 9 04:56:01.789172 kubelet[2999]: I0909 04:56:01.789131 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a64348e0a8591af95fa22648035f3d69-k8s-certs\") pod \"kube-apiserver-ci-4452.0.0-n-087888047c\" (UID: \"a64348e0a8591af95fa22648035f3d69\") " pod="kube-system/kube-apiserver-ci-4452.0.0-n-087888047c" Sep 9 04:56:01.789172 kubelet[2999]: I0909 04:56:01.789147 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ac5bd3042885e29d49b6a54a4a80da3d-flexvolume-dir\") pod \"kube-controller-manager-ci-4452.0.0-n-087888047c\" (UID: \"ac5bd3042885e29d49b6a54a4a80da3d\") " pod="kube-system/kube-controller-manager-ci-4452.0.0-n-087888047c" Sep 9 04:56:01.789253 kubelet[2999]: I0909 04:56:01.789158 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ac5bd3042885e29d49b6a54a4a80da3d-k8s-certs\") pod \"kube-controller-manager-ci-4452.0.0-n-087888047c\" (UID: \"ac5bd3042885e29d49b6a54a4a80da3d\") " pod="kube-system/kube-controller-manager-ci-4452.0.0-n-087888047c" Sep 9 04:56:01.789253 kubelet[2999]: I0909 04:56:01.789169 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ac5bd3042885e29d49b6a54a4a80da3d-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4452.0.0-n-087888047c\" (UID: 
\"ac5bd3042885e29d49b6a54a4a80da3d\") " pod="kube-system/kube-controller-manager-ci-4452.0.0-n-087888047c" Sep 9 04:56:01.789623 systemd[1]: Created slice kubepods-burstable-poda64348e0a8591af95fa22648035f3d69.slice - libcontainer container kubepods-burstable-poda64348e0a8591af95fa22648035f3d69.slice. Sep 9 04:56:01.790942 kubelet[2999]: E0909 04:56:01.790923 2999 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4452.0.0-n-087888047c\" not found" node="ci-4452.0.0-n-087888047c" Sep 9 04:56:01.965835 kubelet[2999]: I0909 04:56:01.965736 2999 kubelet_node_status.go:75] "Attempting to register node" node="ci-4452.0.0-n-087888047c" Sep 9 04:56:01.966590 kubelet[2999]: E0909 04:56:01.966558 2999 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.39:6443/api/v1/nodes\": dial tcp 10.200.20.39:6443: connect: connection refused" node="ci-4452.0.0-n-087888047c" Sep 9 04:56:02.079236 containerd[1866]: time="2025-09-09T04:56:02.079182653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4452.0.0-n-087888047c,Uid:ac5bd3042885e29d49b6a54a4a80da3d,Namespace:kube-system,Attempt:0,}" Sep 9 04:56:02.083227 containerd[1866]: time="2025-09-09T04:56:02.083199250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4452.0.0-n-087888047c,Uid:62e51bf24f403d84420ff87ce0bb4118,Namespace:kube-system,Attempt:0,}" Sep 9 04:56:02.091977 kubelet[2999]: E0909 04:56:02.091941 2999 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4452.0.0-n-087888047c?timeout=10s\": dial tcp 10.200.20.39:6443: connect: connection refused" interval="800ms" Sep 9 04:56:02.094933 containerd[1866]: time="2025-09-09T04:56:02.094903408Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4452.0.0-n-087888047c,Uid:a64348e0a8591af95fa22648035f3d69,Namespace:kube-system,Attempt:0,}" Sep 9 04:56:02.242376 containerd[1866]: time="2025-09-09T04:56:02.242276433Z" level=info msg="connecting to shim 936ac74d37304f0f7dc5a83a940a9c0b880f3a944d272141d9383f29591892cd" address="unix:///run/containerd/s/bd55b17d555f6d67bea0a953a57c72c7e8283af3105b3f2d0a156589bab409b1" namespace=k8s.io protocol=ttrpc version=3 Sep 9 04:56:02.248887 containerd[1866]: time="2025-09-09T04:56:02.248843988Z" level=info msg="connecting to shim 3031e533fc6e9fcc4ac4b7fd76e66784a03364dbb514b89c1d30c6ff91536a18" address="unix:///run/containerd/s/4996f8c3fb6c4809915da059aadc6182f2473887aa4032e2d6f13165c1467776" namespace=k8s.io protocol=ttrpc version=3 Sep 9 04:56:02.255887 containerd[1866]: time="2025-09-09T04:56:02.255846437Z" level=info msg="connecting to shim 4c41560657cc093b3db3736be916b01711605815f4df740d563a38034769e40a" address="unix:///run/containerd/s/cf7e582209a60d1189fabe944d7607ddd467ca1a867964be9ee92fcdc68d97c3" namespace=k8s.io protocol=ttrpc version=3 Sep 9 04:56:02.278131 systemd[1]: Started cri-containerd-936ac74d37304f0f7dc5a83a940a9c0b880f3a944d272141d9383f29591892cd.scope - libcontainer container 936ac74d37304f0f7dc5a83a940a9c0b880f3a944d272141d9383f29591892cd. Sep 9 04:56:02.283651 systemd[1]: Started cri-containerd-4c41560657cc093b3db3736be916b01711605815f4df740d563a38034769e40a.scope - libcontainer container 4c41560657cc093b3db3736be916b01711605815f4df740d563a38034769e40a. Sep 9 04:56:02.294154 systemd[1]: Started cri-containerd-3031e533fc6e9fcc4ac4b7fd76e66784a03364dbb514b89c1d30c6ff91536a18.scope - libcontainer container 3031e533fc6e9fcc4ac4b7fd76e66784a03364dbb514b89c1d30c6ff91536a18. 
Sep 9 04:56:02.323891 containerd[1866]: time="2025-09-09T04:56:02.323643309Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4452.0.0-n-087888047c,Uid:ac5bd3042885e29d49b6a54a4a80da3d,Namespace:kube-system,Attempt:0,} returns sandbox id \"936ac74d37304f0f7dc5a83a940a9c0b880f3a944d272141d9383f29591892cd\"" Sep 9 04:56:02.332165 containerd[1866]: time="2025-09-09T04:56:02.332129720Z" level=info msg="CreateContainer within sandbox \"936ac74d37304f0f7dc5a83a940a9c0b880f3a944d272141d9383f29591892cd\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 9 04:56:02.332294 kubelet[2999]: W0909 04:56:02.330966 2999 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.39:6443: connect: connection refused Sep 9 04:56:02.332494 kubelet[2999]: E0909 04:56:02.332375 2999 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.39:6443: connect: connection refused" logger="UnhandledError" Sep 9 04:56:02.346344 containerd[1866]: time="2025-09-09T04:56:02.346310600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4452.0.0-n-087888047c,Uid:a64348e0a8591af95fa22648035f3d69,Namespace:kube-system,Attempt:0,} returns sandbox id \"4c41560657cc093b3db3736be916b01711605815f4df740d563a38034769e40a\"" Sep 9 04:56:02.349793 containerd[1866]: time="2025-09-09T04:56:02.349293315Z" level=info msg="CreateContainer within sandbox \"4c41560657cc093b3db3736be916b01711605815f4df740d563a38034769e40a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 9 04:56:02.350339 containerd[1866]: time="2025-09-09T04:56:02.350316357Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4452.0.0-n-087888047c,Uid:62e51bf24f403d84420ff87ce0bb4118,Namespace:kube-system,Attempt:0,} returns sandbox id \"3031e533fc6e9fcc4ac4b7fd76e66784a03364dbb514b89c1d30c6ff91536a18\"" Sep 9 04:56:02.354789 containerd[1866]: time="2025-09-09T04:56:02.354747424Z" level=info msg="CreateContainer within sandbox \"3031e533fc6e9fcc4ac4b7fd76e66784a03364dbb514b89c1d30c6ff91536a18\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 9 04:56:02.358280 containerd[1866]: time="2025-09-09T04:56:02.358253037Z" level=info msg="Container dc62005daada06a273dcb65a9a48f4d0234ab2f632d04969bb3c2d391ca4203e: CDI devices from CRI Config.CDIDevices: []" Sep 9 04:56:02.368291 kubelet[2999]: I0909 04:56:02.368264 2999 kubelet_node_status.go:75] "Attempting to register node" node="ci-4452.0.0-n-087888047c" Sep 9 04:56:02.368816 kubelet[2999]: E0909 04:56:02.368788 2999 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.39:6443/api/v1/nodes\": dial tcp 10.200.20.39:6443: connect: connection refused" node="ci-4452.0.0-n-087888047c" Sep 9 04:56:02.401715 containerd[1866]: time="2025-09-09T04:56:02.401173394Z" level=info msg="Container 536a31e3e562d3b10332875871adec156c18a8ce9a4d2971b8ceb47b72ed75f1: CDI devices from CRI Config.CDIDevices: []" Sep 9 04:56:02.402823 containerd[1866]: time="2025-09-09T04:56:02.402793856Z" level=info msg="CreateContainer within sandbox \"936ac74d37304f0f7dc5a83a940a9c0b880f3a944d272141d9383f29591892cd\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"dc62005daada06a273dcb65a9a48f4d0234ab2f632d04969bb3c2d391ca4203e\"" Sep 9 04:56:02.403682 containerd[1866]: time="2025-09-09T04:56:02.403661396Z" level=info msg="StartContainer for \"dc62005daada06a273dcb65a9a48f4d0234ab2f632d04969bb3c2d391ca4203e\"" Sep 9 04:56:02.405037 containerd[1866]: time="2025-09-09T04:56:02.405015857Z" level=info 
msg="connecting to shim dc62005daada06a273dcb65a9a48f4d0234ab2f632d04969bb3c2d391ca4203e" address="unix:///run/containerd/s/bd55b17d555f6d67bea0a953a57c72c7e8283af3105b3f2d0a156589bab409b1" protocol=ttrpc version=3 Sep 9 04:56:02.422141 systemd[1]: Started cri-containerd-dc62005daada06a273dcb65a9a48f4d0234ab2f632d04969bb3c2d391ca4203e.scope - libcontainer container dc62005daada06a273dcb65a9a48f4d0234ab2f632d04969bb3c2d391ca4203e. Sep 9 04:56:02.425134 containerd[1866]: time="2025-09-09T04:56:02.425102310Z" level=info msg="Container 7c366a0643bfa456f6b4769908479d2f51e01063aac0ca296b8899c505795af6: CDI devices from CRI Config.CDIDevices: []" Sep 9 04:56:02.434533 containerd[1866]: time="2025-09-09T04:56:02.434492087Z" level=info msg="CreateContainer within sandbox \"4c41560657cc093b3db3736be916b01711605815f4df740d563a38034769e40a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"536a31e3e562d3b10332875871adec156c18a8ce9a4d2971b8ceb47b72ed75f1\"" Sep 9 04:56:02.435551 containerd[1866]: time="2025-09-09T04:56:02.435525577Z" level=info msg="StartContainer for \"536a31e3e562d3b10332875871adec156c18a8ce9a4d2971b8ceb47b72ed75f1\"" Sep 9 04:56:02.436526 containerd[1866]: time="2025-09-09T04:56:02.436484025Z" level=info msg="connecting to shim 536a31e3e562d3b10332875871adec156c18a8ce9a4d2971b8ceb47b72ed75f1" address="unix:///run/containerd/s/cf7e582209a60d1189fabe944d7607ddd467ca1a867964be9ee92fcdc68d97c3" protocol=ttrpc version=3 Sep 9 04:56:02.445638 containerd[1866]: time="2025-09-09T04:56:02.444350647Z" level=info msg="CreateContainer within sandbox \"3031e533fc6e9fcc4ac4b7fd76e66784a03364dbb514b89c1d30c6ff91536a18\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7c366a0643bfa456f6b4769908479d2f51e01063aac0ca296b8899c505795af6\"" Sep 9 04:56:02.446376 containerd[1866]: time="2025-09-09T04:56:02.446306672Z" level=info msg="StartContainer for \"7c366a0643bfa456f6b4769908479d2f51e01063aac0ca296b8899c505795af6\"" 
Sep 9 04:56:02.449811 containerd[1866]: time="2025-09-09T04:56:02.449779595Z" level=info msg="connecting to shim 7c366a0643bfa456f6b4769908479d2f51e01063aac0ca296b8899c505795af6" address="unix:///run/containerd/s/4996f8c3fb6c4809915da059aadc6182f2473887aa4032e2d6f13165c1467776" protocol=ttrpc version=3 Sep 9 04:56:02.468077 systemd[1]: Started cri-containerd-536a31e3e562d3b10332875871adec156c18a8ce9a4d2971b8ceb47b72ed75f1.scope - libcontainer container 536a31e3e562d3b10332875871adec156c18a8ce9a4d2971b8ceb47b72ed75f1. Sep 9 04:56:02.469754 kubelet[2999]: W0909 04:56:02.469662 2999 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4452.0.0-n-087888047c&limit=500&resourceVersion=0": dial tcp 10.200.20.39:6443: connect: connection refused Sep 9 04:56:02.469915 kubelet[2999]: E0909 04:56:02.469740 2999 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4452.0.0-n-087888047c&limit=500&resourceVersion=0\": dial tcp 10.200.20.39:6443: connect: connection refused" logger="UnhandledError" Sep 9 04:56:02.477817 containerd[1866]: time="2025-09-09T04:56:02.477771263Z" level=info msg="StartContainer for \"dc62005daada06a273dcb65a9a48f4d0234ab2f632d04969bb3c2d391ca4203e\" returns successfully" Sep 9 04:56:02.478166 systemd[1]: Started cri-containerd-7c366a0643bfa456f6b4769908479d2f51e01063aac0ca296b8899c505795af6.scope - libcontainer container 7c366a0643bfa456f6b4769908479d2f51e01063aac0ca296b8899c505795af6. 
Sep 9 04:56:02.533490 containerd[1866]: time="2025-09-09T04:56:02.533258518Z" level=info msg="StartContainer for \"536a31e3e562d3b10332875871adec156c18a8ce9a4d2971b8ceb47b72ed75f1\" returns successfully" Sep 9 04:56:02.547940 containerd[1866]: time="2025-09-09T04:56:02.547884605Z" level=info msg="StartContainer for \"7c366a0643bfa456f6b4769908479d2f51e01063aac0ca296b8899c505795af6\" returns successfully" Sep 9 04:56:02.673153 kubelet[2999]: E0909 04:56:02.672948 2999 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4452.0.0-n-087888047c\" not found" node="ci-4452.0.0-n-087888047c" Sep 9 04:56:02.677003 kubelet[2999]: E0909 04:56:02.676180 2999 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4452.0.0-n-087888047c\" not found" node="ci-4452.0.0-n-087888047c" Sep 9 04:56:02.679685 kubelet[2999]: E0909 04:56:02.679555 2999 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4452.0.0-n-087888047c\" not found" node="ci-4452.0.0-n-087888047c" Sep 9 04:56:03.172029 kubelet[2999]: I0909 04:56:03.171240 2999 kubelet_node_status.go:75] "Attempting to register node" node="ci-4452.0.0-n-087888047c" Sep 9 04:56:03.681810 kubelet[2999]: E0909 04:56:03.681772 2999 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4452.0.0-n-087888047c\" not found" node="ci-4452.0.0-n-087888047c" Sep 9 04:56:03.682172 kubelet[2999]: E0909 04:56:03.682153 2999 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4452.0.0-n-087888047c\" not found" node="ci-4452.0.0-n-087888047c" Sep 9 04:56:03.814446 kubelet[2999]: E0909 04:56:03.814410 2999 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4452.0.0-n-087888047c\" not found" 
node="ci-4452.0.0-n-087888047c" Sep 9 04:56:03.859856 kubelet[2999]: I0909 04:56:03.859813 2999 kubelet_node_status.go:78] "Successfully registered node" node="ci-4452.0.0-n-087888047c" Sep 9 04:56:03.859856 kubelet[2999]: E0909 04:56:03.859857 2999 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4452.0.0-n-087888047c\": node \"ci-4452.0.0-n-087888047c\" not found" Sep 9 04:56:03.867187 kubelet[2999]: E0909 04:56:03.867147 2999 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4452.0.0-n-087888047c\" not found" Sep 9 04:56:03.968315 kubelet[2999]: E0909 04:56:03.968184 2999 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4452.0.0-n-087888047c\" not found" Sep 9 04:56:04.068405 kubelet[2999]: E0909 04:56:04.068354 2999 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4452.0.0-n-087888047c\" not found" Sep 9 04:56:04.168917 kubelet[2999]: E0909 04:56:04.168873 2999 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4452.0.0-n-087888047c\" not found" Sep 9 04:56:04.269535 kubelet[2999]: E0909 04:56:04.269405 2999 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4452.0.0-n-087888047c\" not found" Sep 9 04:56:04.369933 kubelet[2999]: E0909 04:56:04.369888 2999 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4452.0.0-n-087888047c\" not found" Sep 9 04:56:04.470681 kubelet[2999]: E0909 04:56:04.470564 2999 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4452.0.0-n-087888047c\" not found" Sep 9 04:56:04.571479 kubelet[2999]: E0909 04:56:04.571354 2999 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4452.0.0-n-087888047c\" not found" Sep 9 04:56:04.672243 kubelet[2999]: E0909 04:56:04.672195 2999 
kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4452.0.0-n-087888047c\" not found" Sep 9 04:56:04.772759 kubelet[2999]: E0909 04:56:04.772706 2999 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4452.0.0-n-087888047c\" not found" Sep 9 04:56:04.873331 kubelet[2999]: E0909 04:56:04.873280 2999 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4452.0.0-n-087888047c\" not found" Sep 9 04:56:04.974079 kubelet[2999]: E0909 04:56:04.973942 2999 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4452.0.0-n-087888047c\" not found" Sep 9 04:56:05.074151 kubelet[2999]: E0909 04:56:05.074085 2999 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4452.0.0-n-087888047c\" not found" Sep 9 04:56:05.174800 kubelet[2999]: E0909 04:56:05.174671 2999 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4452.0.0-n-087888047c\" not found" Sep 9 04:56:05.275306 kubelet[2999]: E0909 04:56:05.275260 2999 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4452.0.0-n-087888047c\" not found" Sep 9 04:56:05.375907 kubelet[2999]: E0909 04:56:05.375863 2999 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4452.0.0-n-087888047c\" not found" Sep 9 04:56:05.476655 kubelet[2999]: E0909 04:56:05.476530 2999 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4452.0.0-n-087888047c\" not found" Sep 9 04:56:05.588622 kubelet[2999]: I0909 04:56:05.588589 2999 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4452.0.0-n-087888047c" Sep 9 04:56:05.600430 kubelet[2999]: W0909 04:56:05.600384 2999 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is 
recommended: [must not contain dots] Sep 9 04:56:05.600560 kubelet[2999]: I0909 04:56:05.600516 2999 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4452.0.0-n-087888047c" Sep 9 04:56:05.604615 kubelet[2999]: W0909 04:56:05.604590 2999 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 9 04:56:05.604880 kubelet[2999]: I0909 04:56:05.604803 2999 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4452.0.0-n-087888047c" Sep 9 04:56:05.611754 kubelet[2999]: W0909 04:56:05.611662 2999 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 9 04:56:06.104602 systemd[1]: Reload requested from client PID 3272 ('systemctl') (unit session-9.scope)... Sep 9 04:56:06.104614 systemd[1]: Reloading... Sep 9 04:56:06.181061 zram_generator::config[3337]: No configuration found. Sep 9 04:56:06.327480 systemd[1]: Reloading finished in 222 ms. Sep 9 04:56:06.357792 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 04:56:06.374946 systemd[1]: kubelet.service: Deactivated successfully. Sep 9 04:56:06.375160 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 04:56:06.375214 systemd[1]: kubelet.service: Consumed 640ms CPU time, 125.3M memory peak. Sep 9 04:56:06.376651 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 04:56:06.482275 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 9 04:56:06.489283 (kubelet)[3383]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 9 04:56:06.592649 kubelet[3383]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 04:56:06.593060 kubelet[3383]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 9 04:56:06.593096 kubelet[3383]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 04:56:06.593250 kubelet[3383]: I0909 04:56:06.593220 3383 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 04:56:06.599418 kubelet[3383]: I0909 04:56:06.599392 3383 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 9 04:56:06.599525 kubelet[3383]: I0909 04:56:06.599508 3383 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 04:56:06.600411 kubelet[3383]: I0909 04:56:06.600258 3383 server.go:954] "Client rotation is on, will bootstrap in background" Sep 9 04:56:06.601407 kubelet[3383]: I0909 04:56:06.601388 3383 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Sep 9 04:56:06.603071 kubelet[3383]: I0909 04:56:06.603047 3383 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 04:56:06.607423 kubelet[3383]: I0909 04:56:06.606263 3383 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 9 04:56:06.609387 kubelet[3383]: I0909 04:56:06.609097 3383 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 9 04:56:06.609637 kubelet[3383]: I0909 04:56:06.609612 3383 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 04:56:06.609818 kubelet[3383]: I0909 04:56:06.609695 3383 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4452.0.0-n-087888047c","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none"
,"CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 9 04:56:06.609932 kubelet[3383]: I0909 04:56:06.609921 3383 topology_manager.go:138] "Creating topology manager with none policy" Sep 9 04:56:06.609977 kubelet[3383]: I0909 04:56:06.609970 3383 container_manager_linux.go:304] "Creating device plugin manager" Sep 9 04:56:06.610066 kubelet[3383]: I0909 04:56:06.610058 3383 state_mem.go:36] "Initialized new in-memory state store" Sep 9 04:56:06.610226 kubelet[3383]: I0909 04:56:06.610216 3383 kubelet.go:446] "Attempting to sync node with API server" Sep 9 04:56:06.610368 kubelet[3383]: I0909 04:56:06.610347 3383 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 04:56:06.610438 kubelet[3383]: I0909 04:56:06.610429 3383 kubelet.go:352] "Adding apiserver pod source" Sep 9 04:56:06.610492 kubelet[3383]: I0909 04:56:06.610483 3383 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 04:56:06.613138 kubelet[3383]: I0909 04:56:06.613118 3383 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 9 04:56:06.613504 kubelet[3383]: I0909 04:56:06.613479 3383 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 9 04:56:06.613912 kubelet[3383]: I0909 04:56:06.613883 3383 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 9 04:56:06.613912 kubelet[3383]: I0909 04:56:06.613918 3383 server.go:1287] "Started kubelet" Sep 9 04:56:06.625385 kubelet[3383]: I0909 04:56:06.623899 3383 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 04:56:06.625385 kubelet[3383]: I0909 04:56:06.624110 3383 
dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 04:56:06.625385 kubelet[3383]: I0909 04:56:06.624183 3383 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 04:56:06.625385 kubelet[3383]: I0909 04:56:06.624721 3383 server.go:479] "Adding debug handlers to kubelet server" Sep 9 04:56:06.626041 kubelet[3383]: I0909 04:56:06.625999 3383 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 04:56:06.626286 kubelet[3383]: I0909 04:56:06.626270 3383 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 04:56:06.628648 kubelet[3383]: I0909 04:56:06.628630 3383 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 9 04:56:06.628876 kubelet[3383]: I0909 04:56:06.628862 3383 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 9 04:56:06.629062 kubelet[3383]: I0909 04:56:06.629052 3383 reconciler.go:26] "Reconciler: start to sync state" Sep 9 04:56:06.630306 kubelet[3383]: I0909 04:56:06.630287 3383 factory.go:221] Registration of the systemd container factory successfully Sep 9 04:56:06.630548 kubelet[3383]: I0909 04:56:06.630529 3383 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 04:56:06.632586 kubelet[3383]: E0909 04:56:06.632568 3383 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 9 04:56:06.633287 kubelet[3383]: I0909 04:56:06.633269 3383 factory.go:221] Registration of the containerd container factory successfully Sep 9 04:56:06.637374 kubelet[3383]: I0909 04:56:06.637331 3383 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 9 04:56:06.638142 kubelet[3383]: I0909 04:56:06.638115 3383 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 9 04:56:06.638142 kubelet[3383]: I0909 04:56:06.638134 3383 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 9 04:56:06.638227 kubelet[3383]: I0909 04:56:06.638152 3383 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 9 04:56:06.638227 kubelet[3383]: I0909 04:56:06.638158 3383 kubelet.go:2382] "Starting kubelet main sync loop" Sep 9 04:56:06.638227 kubelet[3383]: E0909 04:56:06.638189 3383 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 04:56:06.671880 kubelet[3383]: I0909 04:56:06.671165 3383 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 9 04:56:06.671880 kubelet[3383]: I0909 04:56:06.671180 3383 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 9 04:56:06.671880 kubelet[3383]: I0909 04:56:06.671200 3383 state_mem.go:36] "Initialized new in-memory state store" Sep 9 04:56:06.671880 kubelet[3383]: I0909 04:56:06.671326 3383 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 9 04:56:06.671880 kubelet[3383]: I0909 04:56:06.671334 3383 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 9 04:56:06.671880 kubelet[3383]: I0909 04:56:06.671348 3383 policy_none.go:49] "None policy: Start" Sep 9 04:56:06.671880 kubelet[3383]: I0909 04:56:06.671355 3383 memory_manager.go:186] "Starting memorymanager" 
policy="None" Sep 9 04:56:06.671880 kubelet[3383]: I0909 04:56:06.671366 3383 state_mem.go:35] "Initializing new in-memory state store" Sep 9 04:56:06.671880 kubelet[3383]: I0909 04:56:06.671432 3383 state_mem.go:75] "Updated machine memory state" Sep 9 04:56:06.674945 kubelet[3383]: I0909 04:56:06.674921 3383 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 9 04:56:06.675140 kubelet[3383]: I0909 04:56:06.675122 3383 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 04:56:06.675636 kubelet[3383]: I0909 04:56:06.675591 3383 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 04:56:06.677418 kubelet[3383]: E0909 04:56:06.676424 3383 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 9 04:56:06.677418 kubelet[3383]: I0909 04:56:06.676495 3383 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 04:56:06.739094 kubelet[3383]: I0909 04:56:06.739058 3383 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4452.0.0-n-087888047c" Sep 9 04:56:06.739346 kubelet[3383]: I0909 04:56:06.739150 3383 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4452.0.0-n-087888047c" Sep 9 04:56:06.739442 kubelet[3383]: I0909 04:56:06.739229 3383 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4452.0.0-n-087888047c" Sep 9 04:56:06.749175 kubelet[3383]: W0909 04:56:06.749151 3383 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 9 04:56:06.749354 kubelet[3383]: E0909 04:56:06.749337 3383 kubelet.go:3196] "Failed creating a mirror pod" err="pods 
\"kube-scheduler-ci-4452.0.0-n-087888047c\" already exists" pod="kube-system/kube-scheduler-ci-4452.0.0-n-087888047c" Sep 9 04:56:06.750013 kubelet[3383]: W0909 04:56:06.749932 3383 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 9 04:56:06.750205 kubelet[3383]: E0909 04:56:06.750180 3383 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4452.0.0-n-087888047c\" already exists" pod="kube-system/kube-controller-manager-ci-4452.0.0-n-087888047c" Sep 9 04:56:06.750376 kubelet[3383]: W0909 04:56:06.750360 3383 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 9 04:56:06.750407 kubelet[3383]: E0909 04:56:06.750390 3383 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4452.0.0-n-087888047c\" already exists" pod="kube-system/kube-apiserver-ci-4452.0.0-n-087888047c" Sep 9 04:56:06.786162 kubelet[3383]: I0909 04:56:06.786138 3383 kubelet_node_status.go:75] "Attempting to register node" node="ci-4452.0.0-n-087888047c" Sep 9 04:56:06.796559 kubelet[3383]: I0909 04:56:06.796522 3383 kubelet_node_status.go:124] "Node was previously registered" node="ci-4452.0.0-n-087888047c" Sep 9 04:56:06.796672 kubelet[3383]: I0909 04:56:06.796604 3383 kubelet_node_status.go:78] "Successfully registered node" node="ci-4452.0.0-n-087888047c" Sep 9 04:56:06.830484 kubelet[3383]: I0909 04:56:06.830311 3383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/62e51bf24f403d84420ff87ce0bb4118-kubeconfig\") pod \"kube-scheduler-ci-4452.0.0-n-087888047c\" (UID: \"62e51bf24f403d84420ff87ce0bb4118\") " pod="kube-system/kube-scheduler-ci-4452.0.0-n-087888047c" Sep 9 04:56:06.830484 kubelet[3383]: I0909 04:56:06.830356 
3383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a64348e0a8591af95fa22648035f3d69-k8s-certs\") pod \"kube-apiserver-ci-4452.0.0-n-087888047c\" (UID: \"a64348e0a8591af95fa22648035f3d69\") " pod="kube-system/kube-apiserver-ci-4452.0.0-n-087888047c" Sep 9 04:56:06.830484 kubelet[3383]: I0909 04:56:06.830370 3383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ac5bd3042885e29d49b6a54a4a80da3d-ca-certs\") pod \"kube-controller-manager-ci-4452.0.0-n-087888047c\" (UID: \"ac5bd3042885e29d49b6a54a4a80da3d\") " pod="kube-system/kube-controller-manager-ci-4452.0.0-n-087888047c" Sep 9 04:56:06.830484 kubelet[3383]: I0909 04:56:06.830388 3383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ac5bd3042885e29d49b6a54a4a80da3d-kubeconfig\") pod \"kube-controller-manager-ci-4452.0.0-n-087888047c\" (UID: \"ac5bd3042885e29d49b6a54a4a80da3d\") " pod="kube-system/kube-controller-manager-ci-4452.0.0-n-087888047c" Sep 9 04:56:06.830484 kubelet[3383]: I0909 04:56:06.830403 3383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ac5bd3042885e29d49b6a54a4a80da3d-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4452.0.0-n-087888047c\" (UID: \"ac5bd3042885e29d49b6a54a4a80da3d\") " pod="kube-system/kube-controller-manager-ci-4452.0.0-n-087888047c" Sep 9 04:56:06.830693 kubelet[3383]: I0909 04:56:06.830415 3383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a64348e0a8591af95fa22648035f3d69-ca-certs\") pod \"kube-apiserver-ci-4452.0.0-n-087888047c\" (UID: 
\"a64348e0a8591af95fa22648035f3d69\") " pod="kube-system/kube-apiserver-ci-4452.0.0-n-087888047c" Sep 9 04:56:06.830693 kubelet[3383]: I0909 04:56:06.830425 3383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a64348e0a8591af95fa22648035f3d69-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4452.0.0-n-087888047c\" (UID: \"a64348e0a8591af95fa22648035f3d69\") " pod="kube-system/kube-apiserver-ci-4452.0.0-n-087888047c" Sep 9 04:56:06.830693 kubelet[3383]: I0909 04:56:06.830437 3383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ac5bd3042885e29d49b6a54a4a80da3d-flexvolume-dir\") pod \"kube-controller-manager-ci-4452.0.0-n-087888047c\" (UID: \"ac5bd3042885e29d49b6a54a4a80da3d\") " pod="kube-system/kube-controller-manager-ci-4452.0.0-n-087888047c" Sep 9 04:56:06.830693 kubelet[3383]: I0909 04:56:06.830446 3383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ac5bd3042885e29d49b6a54a4a80da3d-k8s-certs\") pod \"kube-controller-manager-ci-4452.0.0-n-087888047c\" (UID: \"ac5bd3042885e29d49b6a54a4a80da3d\") " pod="kube-system/kube-controller-manager-ci-4452.0.0-n-087888047c" Sep 9 04:56:07.614290 kubelet[3383]: I0909 04:56:07.614240 3383 apiserver.go:52] "Watching apiserver" Sep 9 04:56:07.629151 kubelet[3383]: I0909 04:56:07.628984 3383 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 9 04:56:07.660644 kubelet[3383]: I0909 04:56:07.660141 3383 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4452.0.0-n-087888047c" Sep 9 04:56:07.660981 kubelet[3383]: I0909 04:56:07.660651 3383 kubelet.go:3194] "Creating a mirror pod for static pod" 
pod="kube-system/kube-scheduler-ci-4452.0.0-n-087888047c" Sep 9 04:56:07.672320 kubelet[3383]: W0909 04:56:07.672013 3383 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 9 04:56:07.672320 kubelet[3383]: E0909 04:56:07.672058 3383 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4452.0.0-n-087888047c\" already exists" pod="kube-system/kube-scheduler-ci-4452.0.0-n-087888047c" Sep 9 04:56:07.672924 kubelet[3383]: W0909 04:56:07.672905 3383 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 9 04:56:07.673047 kubelet[3383]: E0909 04:56:07.673033 3383 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4452.0.0-n-087888047c\" already exists" pod="kube-system/kube-apiserver-ci-4452.0.0-n-087888047c" Sep 9 04:56:07.695646 kubelet[3383]: I0909 04:56:07.695585 3383 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4452.0.0-n-087888047c" podStartSLOduration=2.695569383 podStartE2EDuration="2.695569383s" podCreationTimestamp="2025-09-09 04:56:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 04:56:07.686036563 +0000 UTC m=+1.192987512" watchObservedRunningTime="2025-09-09 04:56:07.695569383 +0000 UTC m=+1.202520324" Sep 9 04:56:07.706399 kubelet[3383]: I0909 04:56:07.706265 3383 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4452.0.0-n-087888047c" podStartSLOduration=2.706250678 podStartE2EDuration="2.706250678s" podCreationTimestamp="2025-09-09 04:56:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-09-09 04:56:07.696071694 +0000 UTC m=+1.203022643" watchObservedRunningTime="2025-09-09 04:56:07.706250678 +0000 UTC m=+1.213201627" Sep 9 04:56:07.708095 kubelet[3383]: I0909 04:56:07.708058 3383 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4452.0.0-n-087888047c" podStartSLOduration=2.7080459980000002 podStartE2EDuration="2.708045998s" podCreationTimestamp="2025-09-09 04:56:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 04:56:07.706670611 +0000 UTC m=+1.213621552" watchObservedRunningTime="2025-09-09 04:56:07.708045998 +0000 UTC m=+1.214997083" Sep 9 04:56:11.635436 kubelet[3383]: I0909 04:56:11.635384 3383 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 9 04:56:11.636605 containerd[1866]: time="2025-09-09T04:56:11.636113219Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 9 04:56:11.636829 kubelet[3383]: I0909 04:56:11.636390 3383 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 9 04:56:12.487337 systemd[1]: Created slice kubepods-besteffort-podb62b3508_c1e3_425f_8757_43b13a1fa20f.slice - libcontainer container kubepods-besteffort-podb62b3508_c1e3_425f_8757_43b13a1fa20f.slice. 
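The `Created slice kubepods-besteffort-pod….slice` entry above shows the naming convention the kubelet's systemd cgroup driver uses for pod slices: the QoS class plus the pod UID, with dashes in the UID replaced by underscores (a dash is systemd's slice-hierarchy separator, so a literal dash would imply extra nesting). A minimal sketch of that observed convention — an illustration, not the kubelet's actual code:

```python
def pod_slice_name(pod_uid: str, qos_class: str = "besteffort") -> str:
    # In slice unit names, '-' separates hierarchy levels
    # (kubepods-besteffort-podX.slice nests under kubepods-besteffort.slice),
    # so dashes inside the pod UID are escaped to underscores.
    escaped_uid = pod_uid.replace("-", "_")
    return f"kubepods-{qos_class}-pod{escaped_uid}.slice"

print(pod_slice_name("b62b3508-c1e3-425f-8757-43b13a1fa20f"))
# kubepods-besteffort-podb62b3508_c1e3_425f_8757_43b13a1fa20f.slice
```

The output matches the slice name systemd reports for the kube-proxy-9jlw7 pod in the entry above.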
Sep 9 04:56:12.567172 kubelet[3383]: I0909 04:56:12.567078 3383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b62b3508-c1e3-425f-8757-43b13a1fa20f-kube-proxy\") pod \"kube-proxy-9jlw7\" (UID: \"b62b3508-c1e3-425f-8757-43b13a1fa20f\") " pod="kube-system/kube-proxy-9jlw7" Sep 9 04:56:12.567172 kubelet[3383]: I0909 04:56:12.567109 3383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b62b3508-c1e3-425f-8757-43b13a1fa20f-xtables-lock\") pod \"kube-proxy-9jlw7\" (UID: \"b62b3508-c1e3-425f-8757-43b13a1fa20f\") " pod="kube-system/kube-proxy-9jlw7" Sep 9 04:56:12.567172 kubelet[3383]: I0909 04:56:12.567119 3383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b62b3508-c1e3-425f-8757-43b13a1fa20f-lib-modules\") pod \"kube-proxy-9jlw7\" (UID: \"b62b3508-c1e3-425f-8757-43b13a1fa20f\") " pod="kube-system/kube-proxy-9jlw7" Sep 9 04:56:12.567172 kubelet[3383]: I0909 04:56:12.567129 3383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fx25l\" (UniqueName: \"kubernetes.io/projected/b62b3508-c1e3-425f-8757-43b13a1fa20f-kube-api-access-fx25l\") pod \"kube-proxy-9jlw7\" (UID: \"b62b3508-c1e3-425f-8757-43b13a1fa20f\") " pod="kube-system/kube-proxy-9jlw7" Sep 9 04:56:12.758803 systemd[1]: Created slice kubepods-besteffort-poda88ffa28_a71d_417a_a732_0a615e65fff0.slice - libcontainer container kubepods-besteffort-poda88ffa28_a71d_417a_a732_0a615e65fff0.slice. 
Sep 9 04:56:12.767679 kubelet[3383]: I0909 04:56:12.767650 3383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7kjs\" (UniqueName: \"kubernetes.io/projected/a88ffa28-a71d-417a-a732-0a615e65fff0-kube-api-access-q7kjs\") pod \"tigera-operator-755d956888-8mgmq\" (UID: \"a88ffa28-a71d-417a-a732-0a615e65fff0\") " pod="tigera-operator/tigera-operator-755d956888-8mgmq" Sep 9 04:56:12.767968 kubelet[3383]: I0909 04:56:12.767681 3383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a88ffa28-a71d-417a-a732-0a615e65fff0-var-lib-calico\") pod \"tigera-operator-755d956888-8mgmq\" (UID: \"a88ffa28-a71d-417a-a732-0a615e65fff0\") " pod="tigera-operator/tigera-operator-755d956888-8mgmq" Sep 9 04:56:12.793727 containerd[1866]: time="2025-09-09T04:56:12.793664782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9jlw7,Uid:b62b3508-c1e3-425f-8757-43b13a1fa20f,Namespace:kube-system,Attempt:0,}" Sep 9 04:56:12.841707 containerd[1866]: time="2025-09-09T04:56:12.841439785Z" level=info msg="connecting to shim 00587987fe9bb0634bc3b83a5b93c7939263c8eacf88afe7d1efcb2e20b4ecb3" address="unix:///run/containerd/s/3391c148436c2f4c02cd5cdd56afa1b12b06fca2876ae9fba36502be90a4c9a3" namespace=k8s.io protocol=ttrpc version=3 Sep 9 04:56:12.862136 systemd[1]: Started cri-containerd-00587987fe9bb0634bc3b83a5b93c7939263c8eacf88afe7d1efcb2e20b4ecb3.scope - libcontainer container 00587987fe9bb0634bc3b83a5b93c7939263c8eacf88afe7d1efcb2e20b4ecb3. 
Sep 9 04:56:12.893664 containerd[1866]: time="2025-09-09T04:56:12.893619222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9jlw7,Uid:b62b3508-c1e3-425f-8757-43b13a1fa20f,Namespace:kube-system,Attempt:0,} returns sandbox id \"00587987fe9bb0634bc3b83a5b93c7939263c8eacf88afe7d1efcb2e20b4ecb3\"" Sep 9 04:56:12.897643 containerd[1866]: time="2025-09-09T04:56:12.897612522Z" level=info msg="CreateContainer within sandbox \"00587987fe9bb0634bc3b83a5b93c7939263c8eacf88afe7d1efcb2e20b4ecb3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 9 04:56:12.925534 containerd[1866]: time="2025-09-09T04:56:12.925323686Z" level=info msg="Container cbc4e9dc187632291d74fc2a52313171510ad110196ef4fecfa2b6183e4c7bab: CDI devices from CRI Config.CDIDevices: []" Sep 9 04:56:12.948590 containerd[1866]: time="2025-09-09T04:56:12.948542942Z" level=info msg="CreateContainer within sandbox \"00587987fe9bb0634bc3b83a5b93c7939263c8eacf88afe7d1efcb2e20b4ecb3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"cbc4e9dc187632291d74fc2a52313171510ad110196ef4fecfa2b6183e4c7bab\"" Sep 9 04:56:12.950579 containerd[1866]: time="2025-09-09T04:56:12.950546816Z" level=info msg="StartContainer for \"cbc4e9dc187632291d74fc2a52313171510ad110196ef4fecfa2b6183e4c7bab\"" Sep 9 04:56:12.951694 containerd[1866]: time="2025-09-09T04:56:12.951669630Z" level=info msg="connecting to shim cbc4e9dc187632291d74fc2a52313171510ad110196ef4fecfa2b6183e4c7bab" address="unix:///run/containerd/s/3391c148436c2f4c02cd5cdd56afa1b12b06fca2876ae9fba36502be90a4c9a3" protocol=ttrpc version=3 Sep 9 04:56:12.971120 systemd[1]: Started cri-containerd-cbc4e9dc187632291d74fc2a52313171510ad110196ef4fecfa2b6183e4c7bab.scope - libcontainer container cbc4e9dc187632291d74fc2a52313171510ad110196ef4fecfa2b6183e4c7bab. 
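The containerd entries above trace the CRI lifecycle for kube-proxy-9jlw7: `RunPodSandbox` returns a 64-hex-character sandbox id, `CreateContainer` is issued within that sandbox (note both "connecting to shim" lines reuse the same shim socket address, since the container shares the sandbox's shim), and `StartContainer` runs it. A toy stub of that call order — assuming nothing beyond the sequence the log shows, not the real CRI API:

```python
import uuid

class ToyCRIRuntime:
    """Records a CRI-style call sequence like the one in the containerd log."""

    def __init__(self):
        self.calls = []

    def run_pod_sandbox(self, pod_name: str) -> str:
        self.calls.append(("RunPodSandbox", pod_name))
        return uuid.uuid4().hex * 2  # 64 hex chars, like the ids in the log

    def create_container(self, sandbox_id: str, name: str) -> str:
        self.calls.append(("CreateContainer", sandbox_id, name))
        return uuid.uuid4().hex * 2

    def start_container(self, container_id: str) -> None:
        self.calls.append(("StartContainer", container_id))

runtime = ToyCRIRuntime()
sandbox = runtime.run_pod_sandbox("kube-proxy-9jlw7")
ctr = runtime.create_container(sandbox, "kube-proxy")
runtime.start_container(ctr)
print([c[0] for c in runtime.calls])
# ['RunPodSandbox', 'CreateContainer', 'StartContainer']
```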
Sep 9 04:56:13.006267 containerd[1866]: time="2025-09-09T04:56:13.006132774Z" level=info msg="StartContainer for \"cbc4e9dc187632291d74fc2a52313171510ad110196ef4fecfa2b6183e4c7bab\" returns successfully" Sep 9 04:56:13.063516 containerd[1866]: time="2025-09-09T04:56:13.063403212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-8mgmq,Uid:a88ffa28-a71d-417a-a732-0a615e65fff0,Namespace:tigera-operator,Attempt:0,}" Sep 9 04:56:13.107360 containerd[1866]: time="2025-09-09T04:56:13.107216913Z" level=info msg="connecting to shim b42f083660aa85730571bd3ef0c4bf598cd08affee24c74adc4b8ae6e8a62e30" address="unix:///run/containerd/s/3d9df4d64256dcfc3873e7da66b1aa1aab90eb18ecc78105050eb2a67bd75942" namespace=k8s.io protocol=ttrpc version=3 Sep 9 04:56:13.139120 systemd[1]: Started cri-containerd-b42f083660aa85730571bd3ef0c4bf598cd08affee24c74adc4b8ae6e8a62e30.scope - libcontainer container b42f083660aa85730571bd3ef0c4bf598cd08affee24c74adc4b8ae6e8a62e30. Sep 9 04:56:13.168367 containerd[1866]: time="2025-09-09T04:56:13.168326829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-8mgmq,Uid:a88ffa28-a71d-417a-a732-0a615e65fff0,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"b42f083660aa85730571bd3ef0c4bf598cd08affee24c74adc4b8ae6e8a62e30\"" Sep 9 04:56:13.170590 containerd[1866]: time="2025-09-09T04:56:13.170561691Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 9 04:56:13.683380 kubelet[3383]: I0909 04:56:13.683332 3383 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9jlw7" podStartSLOduration=1.683317948 podStartE2EDuration="1.683317948s" podCreationTimestamp="2025-09-09 04:56:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 04:56:13.682832052 +0000 UTC m=+7.189782993" watchObservedRunningTime="2025-09-09 04:56:13.683317948 
+0000 UTC m=+7.190268889" Sep 9 04:56:14.873006 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2824236473.mount: Deactivated successfully. Sep 9 04:56:15.236368 containerd[1866]: time="2025-09-09T04:56:15.236256859Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:56:15.240114 containerd[1866]: time="2025-09-09T04:56:15.240084612Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=22152365" Sep 9 04:56:15.245267 containerd[1866]: time="2025-09-09T04:56:15.245240167Z" level=info msg="ImageCreate event name:\"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:56:15.250061 containerd[1866]: time="2025-09-09T04:56:15.250032247Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:56:15.250640 containerd[1866]: time="2025-09-09T04:56:15.250295623Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"22148360\" in 2.079701427s" Sep 9 04:56:15.250640 containerd[1866]: time="2025-09-09T04:56:15.250317968Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\"" Sep 9 04:56:15.253285 containerd[1866]: time="2025-09-09T04:56:15.253209579Z" level=info msg="CreateContainer within sandbox \"b42f083660aa85730571bd3ef0c4bf598cd08affee24c74adc4b8ae6e8a62e30\" for container 
&ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 9 04:56:15.275738 containerd[1866]: time="2025-09-09T04:56:15.275704618Z" level=info msg="Container 9ff9cebb6ae49bcd4f8bda3bb1a337cd5dd5f254769c0aad799fef73d7fbd27e: CDI devices from CRI Config.CDIDevices: []" Sep 9 04:56:15.278076 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1544452071.mount: Deactivated successfully. Sep 9 04:56:15.294009 containerd[1866]: time="2025-09-09T04:56:15.293962019Z" level=info msg="CreateContainer within sandbox \"b42f083660aa85730571bd3ef0c4bf598cd08affee24c74adc4b8ae6e8a62e30\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"9ff9cebb6ae49bcd4f8bda3bb1a337cd5dd5f254769c0aad799fef73d7fbd27e\"" Sep 9 04:56:15.296549 containerd[1866]: time="2025-09-09T04:56:15.295754180Z" level=info msg="StartContainer for \"9ff9cebb6ae49bcd4f8bda3bb1a337cd5dd5f254769c0aad799fef73d7fbd27e\"" Sep 9 04:56:15.296549 containerd[1866]: time="2025-09-09T04:56:15.296453362Z" level=info msg="connecting to shim 9ff9cebb6ae49bcd4f8bda3bb1a337cd5dd5f254769c0aad799fef73d7fbd27e" address="unix:///run/containerd/s/3d9df4d64256dcfc3873e7da66b1aa1aab90eb18ecc78105050eb2a67bd75942" protocol=ttrpc version=3 Sep 9 04:56:15.316128 systemd[1]: Started cri-containerd-9ff9cebb6ae49bcd4f8bda3bb1a337cd5dd5f254769c0aad799fef73d7fbd27e.scope - libcontainer container 9ff9cebb6ae49bcd4f8bda3bb1a337cd5dd5f254769c0aad799fef73d7fbd27e. 
Sep 9 04:56:15.340745 containerd[1866]: time="2025-09-09T04:56:15.340699601Z" level=info msg="StartContainer for \"9ff9cebb6ae49bcd4f8bda3bb1a337cd5dd5f254769c0aad799fef73d7fbd27e\" returns successfully" Sep 9 04:56:15.687943 kubelet[3383]: I0909 04:56:15.687340 3383 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-8mgmq" podStartSLOduration=1.606002448 podStartE2EDuration="3.687326518s" podCreationTimestamp="2025-09-09 04:56:12 +0000 UTC" firstStartedPulling="2025-09-09 04:56:13.169552956 +0000 UTC m=+6.676503897" lastFinishedPulling="2025-09-09 04:56:15.250877026 +0000 UTC m=+8.757827967" observedRunningTime="2025-09-09 04:56:15.687308981 +0000 UTC m=+9.194259930" watchObservedRunningTime="2025-09-09 04:56:15.687326518 +0000 UTC m=+9.194277459" Sep 9 04:56:20.517067 sudo[2362]: pam_unix(sudo:session): session closed for user root Sep 9 04:56:20.587780 sshd[2361]: Connection closed by 10.200.16.10 port 37518 Sep 9 04:56:20.588340 sshd-session[2358]: pam_unix(sshd:session): session closed for user core Sep 9 04:56:20.593404 systemd-logind[1850]: Session 9 logged out. Waiting for processes to exit. Sep 9 04:56:20.594667 systemd[1]: sshd@6-10.200.20.39:22-10.200.16.10:37518.service: Deactivated successfully. Sep 9 04:56:20.599800 systemd[1]: session-9.scope: Deactivated successfully. Sep 9 04:56:20.600220 systemd[1]: session-9.scope: Consumed 3.311s CPU time, 221.6M memory peak. Sep 9 04:56:20.602959 systemd-logind[1850]: Removed session 9. Sep 9 04:56:25.351299 systemd[1]: Created slice kubepods-besteffort-pod75855ca4_cae2_4eb4_a2bf_c2ae0632b99f.slice - libcontainer container kubepods-besteffort-pod75855ca4_cae2_4eb4_a2bf_c2ae0632b99f.slice. 
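The pod_startup_latency_tracker entry above for tigera-operator-755d956888-8mgmq reports podStartSLOduration=1.606002448s against podStartE2EDuration=3.687326518s: the SLO figure excludes the image-pull window (lastFinishedPulling minus firstStartedPulling). Reproducing that arithmetic with the clock times taken verbatim from the log entry — integer nanoseconds avoid float error:

```python
def ns(hms: str) -> int:
    """Convert an HH:MM:SS[.fraction] clock time to integer nanoseconds."""
    clock, _, frac = hms.partition(".")
    h, m, s = (int(x) for x in clock.split(":"))
    return (h * 3600 + m * 60 + s) * 10**9 + int((frac or "0").ljust(9, "0"))

created = ns("04:56:12")               # podCreationTimestamp
first_pull = ns("04:56:13.169552956")  # firstStartedPulling
last_pull = ns("04:56:15.250877026")   # lastFinishedPulling
running = ns("04:56:15.687326518")     # observedRunningTime

e2e = running - created                        # podStartE2EDuration
slo = e2e - (last_pull - first_pull)           # SLO duration excludes pulls
print(f"{e2e / 1e9:.9f}s {slo / 1e9:.9f}s")
# 3.687326518s 1.606002448s
```

Both values match the log line exactly, confirming the relationship between the two durations. (For kube-proxy above, both pull timestamps are the zero value `0001-01-01 00:00:00`, so its SLO and E2E durations are equal.)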
Sep 9 04:56:25.439754 kubelet[3383]: I0909 04:56:25.439579 3383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/75855ca4-cae2-4eb4-a2bf-c2ae0632b99f-tigera-ca-bundle\") pod \"calico-typha-b9d78d7d-g468l\" (UID: \"75855ca4-cae2-4eb4-a2bf-c2ae0632b99f\") " pod="calico-system/calico-typha-b9d78d7d-g468l" Sep 9 04:56:25.439754 kubelet[3383]: I0909 04:56:25.439619 3383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tq6k\" (UniqueName: \"kubernetes.io/projected/75855ca4-cae2-4eb4-a2bf-c2ae0632b99f-kube-api-access-4tq6k\") pod \"calico-typha-b9d78d7d-g468l\" (UID: \"75855ca4-cae2-4eb4-a2bf-c2ae0632b99f\") " pod="calico-system/calico-typha-b9d78d7d-g468l" Sep 9 04:56:25.439754 kubelet[3383]: I0909 04:56:25.439639 3383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/75855ca4-cae2-4eb4-a2bf-c2ae0632b99f-typha-certs\") pod \"calico-typha-b9d78d7d-g468l\" (UID: \"75855ca4-cae2-4eb4-a2bf-c2ae0632b99f\") " pod="calico-system/calico-typha-b9d78d7d-g468l" Sep 9 04:56:25.470310 systemd[1]: Created slice kubepods-besteffort-pod21c9e067_ca69_425c_b811_409c33a70285.slice - libcontainer container kubepods-besteffort-pod21c9e067_ca69_425c_b811_409c33a70285.slice. 
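Every `VerifyControllerAttachedVolume` entry in this log uses the same UniqueName shape: the volume plugin name, a slash, the pod UID, a dash, and the volume name (e.g. `kubernetes.io/secret/75855ca4-cae2-4eb4-a2bf-c2ae0632b99f-typha-certs` for calico-typha's certs above). A small sketch of that observed pattern — illustrative only, not the kubelet's implementation:

```python
def volume_unique_name(plugin: str, pod_uid: str, volume_name: str) -> str:
    # Shape observed in the reconciler_common.go log lines:
    #   <plugin name>/<pod UID>-<volume name>
    return f"{plugin}/{pod_uid}-{volume_name}"

print(volume_unique_name("kubernetes.io/secret",
                         "75855ca4-cae2-4eb4-a2bf-c2ae0632b99f",
                         "typha-certs"))
# kubernetes.io/secret/75855ca4-cae2-4eb4-a2bf-c2ae0632b99f-typha-certs
```

The same shape covers the host-path, configmap, and projected volumes logged for the static pods, kube-proxy, and calico-node (static pods use their hash-style UID, e.g. `ac5bd3042885e29d49b6a54a4a80da3d`, in place of a dashed UID).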
Sep 9 04:56:25.540897 kubelet[3383]: I0909 04:56:25.540799 3383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/21c9e067-ca69-425c-b811-409c33a70285-cni-bin-dir\") pod \"calico-node-8pt4f\" (UID: \"21c9e067-ca69-425c-b811-409c33a70285\") " pod="calico-system/calico-node-8pt4f" Sep 9 04:56:25.540897 kubelet[3383]: I0909 04:56:25.540833 3383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/21c9e067-ca69-425c-b811-409c33a70285-cni-log-dir\") pod \"calico-node-8pt4f\" (UID: \"21c9e067-ca69-425c-b811-409c33a70285\") " pod="calico-system/calico-node-8pt4f" Sep 9 04:56:25.542446 kubelet[3383]: I0909 04:56:25.541128 3383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/21c9e067-ca69-425c-b811-409c33a70285-cni-net-dir\") pod \"calico-node-8pt4f\" (UID: \"21c9e067-ca69-425c-b811-409c33a70285\") " pod="calico-system/calico-node-8pt4f" Sep 9 04:56:25.542446 kubelet[3383]: I0909 04:56:25.541162 3383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21c9e067-ca69-425c-b811-409c33a70285-tigera-ca-bundle\") pod \"calico-node-8pt4f\" (UID: \"21c9e067-ca69-425c-b811-409c33a70285\") " pod="calico-system/calico-node-8pt4f" Sep 9 04:56:25.542446 kubelet[3383]: I0909 04:56:25.541187 3383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/21c9e067-ca69-425c-b811-409c33a70285-flexvol-driver-host\") pod \"calico-node-8pt4f\" (UID: \"21c9e067-ca69-425c-b811-409c33a70285\") " pod="calico-system/calico-node-8pt4f" Sep 9 04:56:25.542446 kubelet[3383]: I0909 04:56:25.541202 3383 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/21c9e067-ca69-425c-b811-409c33a70285-var-lib-calico\") pod \"calico-node-8pt4f\" (UID: \"21c9e067-ca69-425c-b811-409c33a70285\") " pod="calico-system/calico-node-8pt4f" Sep 9 04:56:25.542446 kubelet[3383]: I0909 04:56:25.541220 3383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/21c9e067-ca69-425c-b811-409c33a70285-policysync\") pod \"calico-node-8pt4f\" (UID: \"21c9e067-ca69-425c-b811-409c33a70285\") " pod="calico-system/calico-node-8pt4f" Sep 9 04:56:25.542529 kubelet[3383]: I0909 04:56:25.541230 3383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/21c9e067-ca69-425c-b811-409c33a70285-xtables-lock\") pod \"calico-node-8pt4f\" (UID: \"21c9e067-ca69-425c-b811-409c33a70285\") " pod="calico-system/calico-node-8pt4f" Sep 9 04:56:25.542529 kubelet[3383]: I0909 04:56:25.541361 3383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/21c9e067-ca69-425c-b811-409c33a70285-var-run-calico\") pod \"calico-node-8pt4f\" (UID: \"21c9e067-ca69-425c-b811-409c33a70285\") " pod="calico-system/calico-node-8pt4f" Sep 9 04:56:25.542529 kubelet[3383]: I0909 04:56:25.541377 3383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/21c9e067-ca69-425c-b811-409c33a70285-lib-modules\") pod \"calico-node-8pt4f\" (UID: \"21c9e067-ca69-425c-b811-409c33a70285\") " pod="calico-system/calico-node-8pt4f" Sep 9 04:56:25.542529 kubelet[3383]: I0909 04:56:25.541394 3383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-m7c5w\" (UniqueName: \"kubernetes.io/projected/21c9e067-ca69-425c-b811-409c33a70285-kube-api-access-m7c5w\") pod \"calico-node-8pt4f\" (UID: \"21c9e067-ca69-425c-b811-409c33a70285\") " pod="calico-system/calico-node-8pt4f" Sep 9 04:56:25.542529 kubelet[3383]: I0909 04:56:25.541416 3383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/21c9e067-ca69-425c-b811-409c33a70285-node-certs\") pod \"calico-node-8pt4f\" (UID: \"21c9e067-ca69-425c-b811-409c33a70285\") " pod="calico-system/calico-node-8pt4f" Sep 9 04:56:25.610711 kubelet[3383]: E0909 04:56:25.610589 3383 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-blr4m" podUID="ef65cad1-f38f-4d74-aff6-26558cee563c" Sep 9 04:56:25.643344 kubelet[3383]: I0909 04:56:25.642458 3383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ef65cad1-f38f-4d74-aff6-26558cee563c-kubelet-dir\") pod \"csi-node-driver-blr4m\" (UID: \"ef65cad1-f38f-4d74-aff6-26558cee563c\") " pod="calico-system/csi-node-driver-blr4m" Sep 9 04:56:25.643344 kubelet[3383]: I0909 04:56:25.642509 3383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/ef65cad1-f38f-4d74-aff6-26558cee563c-varrun\") pod \"csi-node-driver-blr4m\" (UID: \"ef65cad1-f38f-4d74-aff6-26558cee563c\") " pod="calico-system/csi-node-driver-blr4m" Sep 9 04:56:25.643344 kubelet[3383]: I0909 04:56:25.642523 3383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcsn2\" (UniqueName: 
\"kubernetes.io/projected/ef65cad1-f38f-4d74-aff6-26558cee563c-kube-api-access-lcsn2\") pod \"csi-node-driver-blr4m\" (UID: \"ef65cad1-f38f-4d74-aff6-26558cee563c\") " pod="calico-system/csi-node-driver-blr4m" Sep 9 04:56:25.643344 kubelet[3383]: I0909 04:56:25.642564 3383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ef65cad1-f38f-4d74-aff6-26558cee563c-socket-dir\") pod \"csi-node-driver-blr4m\" (UID: \"ef65cad1-f38f-4d74-aff6-26558cee563c\") " pod="calico-system/csi-node-driver-blr4m" Sep 9 04:56:25.643344 kubelet[3383]: I0909 04:56:25.642582 3383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ef65cad1-f38f-4d74-aff6-26558cee563c-registration-dir\") pod \"csi-node-driver-blr4m\" (UID: \"ef65cad1-f38f-4d74-aff6-26558cee563c\") " pod="calico-system/csi-node-driver-blr4m" Sep 9 04:56:25.646899 kubelet[3383]: E0909 04:56:25.646871 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:56:25.646899 kubelet[3383]: W0909 04:56:25.646891 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:56:25.647423 kubelet[3383]: E0909 04:56:25.646911 3383 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:56:25.653154 kubelet[3383]: E0909 04:56:25.652874 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:56:25.653459 kubelet[3383]: W0909 04:56:25.653332 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:56:25.653459 kubelet[3383]: E0909 04:56:25.653419 3383 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:56:25.656537 containerd[1866]: time="2025-09-09T04:56:25.656424785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-b9d78d7d-g468l,Uid:75855ca4-cae2-4eb4-a2bf-c2ae0632b99f,Namespace:calico-system,Attempt:0,}" Sep 9 04:56:25.662781 kubelet[3383]: E0909 04:56:25.662748 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:56:25.662781 kubelet[3383]: W0909 04:56:25.662763 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:56:25.662952 kubelet[3383]: E0909 04:56:25.662870 3383 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:56:25.712591 containerd[1866]: time="2025-09-09T04:56:25.712465030Z" level=info msg="connecting to shim a3f5c4b090e8753731363f6074e723afb710923882cf21ada0e18da925a1c3ba" address="unix:///run/containerd/s/8ff0858b05ae440b08d41f285c7d4f1712c185f64f859276fdf144c31196fb76" namespace=k8s.io protocol=ttrpc version=3 Sep 9 04:56:25.742231 systemd[1]: Started cri-containerd-a3f5c4b090e8753731363f6074e723afb710923882cf21ada0e18da925a1c3ba.scope - libcontainer container a3f5c4b090e8753731363f6074e723afb710923882cf21ada0e18da925a1c3ba. Sep 9 04:56:25.743695 kubelet[3383]: E0909 04:56:25.743524 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:56:25.743695 kubelet[3383]: W0909 04:56:25.743543 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:56:25.743695 kubelet[3383]: E0909 04:56:25.743562 3383 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:56:25.743900 kubelet[3383]: E0909 04:56:25.743838 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:56:25.743900 kubelet[3383]: W0909 04:56:25.743849 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:56:25.743900 kubelet[3383]: E0909 04:56:25.743875 3383 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:56:25.744257 kubelet[3383]: E0909 04:56:25.744239 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:56:25.744257 kubelet[3383]: W0909 04:56:25.744252 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:56:25.744257 kubelet[3383]: E0909 04:56:25.744265 3383 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:56:25.744801 kubelet[3383]: E0909 04:56:25.744780 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:56:25.744801 kubelet[3383]: W0909 04:56:25.744796 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:56:25.744801 kubelet[3383]: E0909 04:56:25.744811 3383 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:56:25.745484 kubelet[3383]: E0909 04:56:25.745212 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:56:25.745484 kubelet[3383]: W0909 04:56:25.745224 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:56:25.745484 kubelet[3383]: E0909 04:56:25.745243 3383 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:56:25.745812 kubelet[3383]: E0909 04:56:25.745765 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:56:25.746028 kubelet[3383]: W0909 04:56:25.746011 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:56:25.746135 kubelet[3383]: E0909 04:56:25.746108 3383 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:56:25.746431 kubelet[3383]: E0909 04:56:25.746420 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:56:25.746601 kubelet[3383]: W0909 04:56:25.746494 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:56:25.746601 kubelet[3383]: E0909 04:56:25.746534 3383 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:56:25.746883 kubelet[3383]: E0909 04:56:25.746744 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:56:25.747051 kubelet[3383]: W0909 04:56:25.746945 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:56:25.747051 kubelet[3383]: E0909 04:56:25.746981 3383 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:56:25.747197 kubelet[3383]: E0909 04:56:25.747177 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:56:25.747288 kubelet[3383]: W0909 04:56:25.747271 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:56:25.747535 kubelet[3383]: E0909 04:56:25.747513 3383 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:56:25.748307 kubelet[3383]: E0909 04:56:25.748211 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:56:25.748486 kubelet[3383]: W0909 04:56:25.748470 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:56:25.748603 kubelet[3383]: E0909 04:56:25.748580 3383 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:56:25.748790 kubelet[3383]: E0909 04:56:25.748778 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:56:25.748934 kubelet[3383]: W0909 04:56:25.748825 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:56:25.748934 kubelet[3383]: E0909 04:56:25.748855 3383 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:56:25.749214 kubelet[3383]: E0909 04:56:25.749117 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:56:25.749214 kubelet[3383]: W0909 04:56:25.749138 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:56:25.749414 kubelet[3383]: E0909 04:56:25.749404 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:56:25.749555 kubelet[3383]: W0909 04:56:25.749455 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:56:25.749555 kubelet[3383]: E0909 04:56:25.749511 3383 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:56:25.749555 kubelet[3383]: E0909 04:56:25.749531 3383 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:56:25.749760 kubelet[3383]: E0909 04:56:25.749738 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:56:25.749893 kubelet[3383]: W0909 04:56:25.749807 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:56:25.749893 kubelet[3383]: E0909 04:56:25.749869 3383 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:56:25.750194 kubelet[3383]: E0909 04:56:25.750131 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:56:25.750194 kubelet[3383]: W0909 04:56:25.750142 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:56:25.750424 kubelet[3383]: E0909 04:56:25.750369 3383 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:56:25.750580 kubelet[3383]: E0909 04:56:25.750392 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:56:25.750580 kubelet[3383]: W0909 04:56:25.750484 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:56:25.750580 kubelet[3383]: E0909 04:56:25.750515 3383 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:56:25.750881 kubelet[3383]: E0909 04:56:25.750784 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:56:25.750881 kubelet[3383]: W0909 04:56:25.750811 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:56:25.750881 kubelet[3383]: E0909 04:56:25.750833 3383 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:56:25.751159 kubelet[3383]: E0909 04:56:25.751108 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:56:25.751159 kubelet[3383]: W0909 04:56:25.751118 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:56:25.751159 kubelet[3383]: E0909 04:56:25.751140 3383 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:56:25.751607 kubelet[3383]: E0909 04:56:25.751499 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:56:25.751607 kubelet[3383]: W0909 04:56:25.751520 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:56:25.751607 kubelet[3383]: E0909 04:56:25.751541 3383 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:56:25.751745 kubelet[3383]: E0909 04:56:25.751735 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:56:25.751878 kubelet[3383]: W0909 04:56:25.751763 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:56:25.751878 kubelet[3383]: E0909 04:56:25.751786 3383 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:56:25.752090 kubelet[3383]: E0909 04:56:25.752055 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:56:25.752090 kubelet[3383]: W0909 04:56:25.752065 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:56:25.752528 kubelet[3383]: E0909 04:56:25.752426 3383 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:56:25.752612 kubelet[3383]: E0909 04:56:25.752590 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:56:25.752612 kubelet[3383]: W0909 04:56:25.752605 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:56:25.752662 kubelet[3383]: E0909 04:56:25.752620 3383 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:56:25.752911 kubelet[3383]: E0909 04:56:25.752892 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:56:25.753282 kubelet[3383]: W0909 04:56:25.753071 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:56:25.753282 kubelet[3383]: E0909 04:56:25.753093 3383 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:56:25.753949 kubelet[3383]: E0909 04:56:25.753932 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:56:25.754248 kubelet[3383]: W0909 04:56:25.754029 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:56:25.754248 kubelet[3383]: E0909 04:56:25.754045 3383 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:56:25.754713 kubelet[3383]: E0909 04:56:25.754682 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:56:25.754851 kubelet[3383]: W0909 04:56:25.754800 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:56:25.754851 kubelet[3383]: E0909 04:56:25.754826 3383 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:56:25.761128 kubelet[3383]: E0909 04:56:25.761080 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:56:25.761128 kubelet[3383]: W0909 04:56:25.761093 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:56:25.761128 kubelet[3383]: E0909 04:56:25.761104 3383 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:56:25.774382 containerd[1866]: time="2025-09-09T04:56:25.774045025Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8pt4f,Uid:21c9e067-ca69-425c-b811-409c33a70285,Namespace:calico-system,Attempt:0,}" Sep 9 04:56:25.800025 containerd[1866]: time="2025-09-09T04:56:25.799935364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-b9d78d7d-g468l,Uid:75855ca4-cae2-4eb4-a2bf-c2ae0632b99f,Namespace:calico-system,Attempt:0,} returns sandbox id \"a3f5c4b090e8753731363f6074e723afb710923882cf21ada0e18da925a1c3ba\"" Sep 9 04:56:25.802116 containerd[1866]: time="2025-09-09T04:56:25.802085453Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 9 04:56:25.846801 containerd[1866]: time="2025-09-09T04:56:25.846699422Z" level=info msg="connecting to shim a3cb1c2437fee0d6a7ac6e3215e919357c3cd0ed570cc544e7271ca1d02a55e9" address="unix:///run/containerd/s/4594019ecbbadff266c8994305d5bf89d1cfb196a3283fcd6aeb3d6b3f00ef39" namespace=k8s.io protocol=ttrpc version=3 Sep 9 04:56:25.876394 systemd[1]: Started cri-containerd-a3cb1c2437fee0d6a7ac6e3215e919357c3cd0ed570cc544e7271ca1d02a55e9.scope - libcontainer container a3cb1c2437fee0d6a7ac6e3215e919357c3cd0ed570cc544e7271ca1d02a55e9. 
Sep 9 04:56:25.910855 containerd[1866]: time="2025-09-09T04:56:25.910784643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8pt4f,Uid:21c9e067-ca69-425c-b811-409c33a70285,Namespace:calico-system,Attempt:0,} returns sandbox id \"a3cb1c2437fee0d6a7ac6e3215e919357c3cd0ed570cc544e7271ca1d02a55e9\"" Sep 9 04:56:27.156101 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount799507159.mount: Deactivated successfully. Sep 9 04:56:27.546557 containerd[1866]: time="2025-09-09T04:56:27.545951145Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:56:27.550162 containerd[1866]: time="2025-09-09T04:56:27.550124497Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=33105775" Sep 9 04:56:27.554317 containerd[1866]: time="2025-09-09T04:56:27.554293352Z" level=info msg="ImageCreate event name:\"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:56:27.559808 containerd[1866]: time="2025-09-09T04:56:27.559678511Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:56:27.560394 containerd[1866]: time="2025-09-09T04:56:27.560367917Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"33105629\" in 1.758250136s" Sep 9 04:56:27.560394 containerd[1866]: time="2025-09-09T04:56:27.560397054Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image 
reference \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\"" Sep 9 04:56:27.562481 containerd[1866]: time="2025-09-09T04:56:27.562129711Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 9 04:56:27.573306 containerd[1866]: time="2025-09-09T04:56:27.573243335Z" level=info msg="CreateContainer within sandbox \"a3f5c4b090e8753731363f6074e723afb710923882cf21ada0e18da925a1c3ba\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 9 04:56:27.599224 containerd[1866]: time="2025-09-09T04:56:27.598504972Z" level=info msg="Container 7522fb2ea366f8ac87e9b2e4ddc46b5c8d1326a219d8d7fed50457efecaad8f1: CDI devices from CRI Config.CDIDevices: []" Sep 9 04:56:27.620352 containerd[1866]: time="2025-09-09T04:56:27.620294808Z" level=info msg="CreateContainer within sandbox \"a3f5c4b090e8753731363f6074e723afb710923882cf21ada0e18da925a1c3ba\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"7522fb2ea366f8ac87e9b2e4ddc46b5c8d1326a219d8d7fed50457efecaad8f1\"" Sep 9 04:56:27.621645 containerd[1866]: time="2025-09-09T04:56:27.621594858Z" level=info msg="StartContainer for \"7522fb2ea366f8ac87e9b2e4ddc46b5c8d1326a219d8d7fed50457efecaad8f1\"" Sep 9 04:56:27.622933 containerd[1866]: time="2025-09-09T04:56:27.622884956Z" level=info msg="connecting to shim 7522fb2ea366f8ac87e9b2e4ddc46b5c8d1326a219d8d7fed50457efecaad8f1" address="unix:///run/containerd/s/8ff0858b05ae440b08d41f285c7d4f1712c185f64f859276fdf144c31196fb76" protocol=ttrpc version=3 Sep 9 04:56:27.638610 kubelet[3383]: E0909 04:56:27.638573 3383 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-blr4m" podUID="ef65cad1-f38f-4d74-aff6-26558cee563c" Sep 9 04:56:27.644238 systemd[1]: Started 
cri-containerd-7522fb2ea366f8ac87e9b2e4ddc46b5c8d1326a219d8d7fed50457efecaad8f1.scope - libcontainer container 7522fb2ea366f8ac87e9b2e4ddc46b5c8d1326a219d8d7fed50457efecaad8f1. Sep 9 04:56:27.682791 containerd[1866]: time="2025-09-09T04:56:27.682661121Z" level=info msg="StartContainer for \"7522fb2ea366f8ac87e9b2e4ddc46b5c8d1326a219d8d7fed50457efecaad8f1\" returns successfully" Sep 9 04:56:27.728800 kubelet[3383]: I0909 04:56:27.728738 3383 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-b9d78d7d-g468l" podStartSLOduration=0.968018829 podStartE2EDuration="2.728391702s" podCreationTimestamp="2025-09-09 04:56:25 +0000 UTC" firstStartedPulling="2025-09-09 04:56:25.800862584 +0000 UTC m=+19.307813525" lastFinishedPulling="2025-09-09 04:56:27.561235449 +0000 UTC m=+21.068186398" observedRunningTime="2025-09-09 04:56:27.728374134 +0000 UTC m=+21.235325075" watchObservedRunningTime="2025-09-09 04:56:27.728391702 +0000 UTC m=+21.235342643" Sep 9 04:56:27.742282 kubelet[3383]: E0909 04:56:27.742106 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:56:27.742282 kubelet[3383]: W0909 04:56:27.742131 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:56:27.742282 kubelet[3383]: E0909 04:56:27.742152 3383 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:56:27.743339 kubelet[3383]: E0909 04:56:27.743077 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:56:27.743339 kubelet[3383]: W0909 04:56:27.743091 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:56:27.743339 kubelet[3383]: E0909 04:56:27.743124 3383 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:56:27.744095 kubelet[3383]: E0909 04:56:27.743986 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:56:27.744195 kubelet[3383]: W0909 04:56:27.744180 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:56:27.744329 kubelet[3383]: E0909 04:56:27.744316 3383 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:56:27.744675 kubelet[3383]: E0909 04:56:27.744637 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:56:27.745084 kubelet[3383]: W0909 04:56:27.744788 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:56:27.745084 kubelet[3383]: E0909 04:56:27.744901 3383 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:56:27.746096 kubelet[3383]: E0909 04:56:27.746065 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:56:27.746285 kubelet[3383]: W0909 04:56:27.746265 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:56:27.746467 kubelet[3383]: E0909 04:56:27.746450 3383 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:56:27.748112 kubelet[3383]: E0909 04:56:27.747320 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:56:27.748388 kubelet[3383]: W0909 04:56:27.748280 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:56:27.748388 kubelet[3383]: E0909 04:56:27.748302 3383 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:56:27.749050 kubelet[3383]: E0909 04:56:27.748950 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:56:27.749050 kubelet[3383]: W0909 04:56:27.748961 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:56:27.749050 kubelet[3383]: E0909 04:56:27.748971 3383 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:56:27.749290 kubelet[3383]: E0909 04:56:27.749279 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:56:27.749446 kubelet[3383]: W0909 04:56:27.749341 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:56:27.749446 kubelet[3383]: E0909 04:56:27.749356 3383 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:56:27.750071 kubelet[3383]: E0909 04:56:27.750055 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:56:27.750071 kubelet[3383]: W0909 04:56:27.750067 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:56:27.750176 kubelet[3383]: E0909 04:56:27.750080 3383 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:56:27.750443 kubelet[3383]: E0909 04:56:27.750273 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:56:27.750443 kubelet[3383]: W0909 04:56:27.750281 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:56:27.750443 kubelet[3383]: E0909 04:56:27.750290 3383 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:56:27.750443 kubelet[3383]: E0909 04:56:27.750391 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:56:27.750443 kubelet[3383]: W0909 04:56:27.750397 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:56:27.750443 kubelet[3383]: E0909 04:56:27.750403 3383 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:56:27.750703 kubelet[3383]: E0909 04:56:27.750693 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:56:27.750703 kubelet[3383]: W0909 04:56:27.750700 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:56:27.750785 kubelet[3383]: E0909 04:56:27.750707 3383 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:56:27.750849 kubelet[3383]: E0909 04:56:27.750803 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:56:27.750849 kubelet[3383]: W0909 04:56:27.750808 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:56:27.750849 kubelet[3383]: E0909 04:56:27.750814 3383 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:56:27.750941 kubelet[3383]: E0909 04:56:27.750904 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:56:27.750941 kubelet[3383]: W0909 04:56:27.750909 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:56:27.750941 kubelet[3383]: E0909 04:56:27.750913 3383 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:56:27.751016 kubelet[3383]: E0909 04:56:27.750987 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:56:27.751016 kubelet[3383]: W0909 04:56:27.751007 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:56:27.751016 kubelet[3383]: E0909 04:56:27.751013 3383 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:56:27.762870 kubelet[3383]: E0909 04:56:27.762656 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:56:27.763332 kubelet[3383]: W0909 04:56:27.762983 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:56:27.763332 kubelet[3383]: E0909 04:56:27.763013 3383 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:56:27.763583 kubelet[3383]: E0909 04:56:27.763561 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:56:27.763694 kubelet[3383]: W0909 04:56:27.763681 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:56:27.763754 kubelet[3383]: E0909 04:56:27.763745 3383 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:56:27.764049 kubelet[3383]: E0909 04:56:27.764028 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:56:27.764049 kubelet[3383]: W0909 04:56:27.764044 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:56:27.764109 kubelet[3383]: E0909 04:56:27.764061 3383 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:56:27.764328 kubelet[3383]: E0909 04:56:27.764312 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:56:27.764328 kubelet[3383]: W0909 04:56:27.764325 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:56:27.764471 kubelet[3383]: E0909 04:56:27.764339 3383 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:56:27.764570 kubelet[3383]: E0909 04:56:27.764544 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:56:27.764570 kubelet[3383]: W0909 04:56:27.764554 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:56:27.764570 kubelet[3383]: E0909 04:56:27.764569 3383 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:56:27.764828 kubelet[3383]: E0909 04:56:27.764815 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:56:27.764828 kubelet[3383]: W0909 04:56:27.764827 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:56:27.765074 kubelet[3383]: E0909 04:56:27.764842 3383 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:56:27.765522 kubelet[3383]: E0909 04:56:27.765334 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:56:27.765522 kubelet[3383]: W0909 04:56:27.765349 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:56:27.765522 kubelet[3383]: E0909 04:56:27.765366 3383 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:56:27.766341 kubelet[3383]: E0909 04:56:27.766324 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:56:27.766943 kubelet[3383]: W0909 04:56:27.766877 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:56:27.766943 kubelet[3383]: E0909 04:56:27.766931 3383 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:56:27.767357 kubelet[3383]: E0909 04:56:27.767298 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:56:27.767357 kubelet[3383]: W0909 04:56:27.767312 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:56:27.767357 kubelet[3383]: E0909 04:56:27.767346 3383 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:56:27.768121 kubelet[3383]: E0909 04:56:27.767599 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:56:27.768121 kubelet[3383]: W0909 04:56:27.768059 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:56:27.768121 kubelet[3383]: E0909 04:56:27.768108 3383 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:56:27.768569 kubelet[3383]: E0909 04:56:27.768514 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:56:27.768569 kubelet[3383]: W0909 04:56:27.768528 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:56:27.768569 kubelet[3383]: E0909 04:56:27.768560 3383 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:56:27.768905 kubelet[3383]: E0909 04:56:27.768805 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:56:27.769208 kubelet[3383]: W0909 04:56:27.768987 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:56:27.769208 kubelet[3383]: E0909 04:56:27.769061 3383 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:56:27.770047 kubelet[3383]: E0909 04:56:27.769887 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:56:27.770047 kubelet[3383]: W0909 04:56:27.769903 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:56:27.770047 kubelet[3383]: E0909 04:56:27.769920 3383 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:56:27.770435 kubelet[3383]: E0909 04:56:27.770178 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:56:27.770435 kubelet[3383]: W0909 04:56:27.770189 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:56:27.770435 kubelet[3383]: E0909 04:56:27.770200 3383 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:56:27.770886 kubelet[3383]: E0909 04:56:27.770682 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:56:27.770886 kubelet[3383]: W0909 04:56:27.770701 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:56:27.770886 kubelet[3383]: E0909 04:56:27.770717 3383 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:56:27.772280 kubelet[3383]: E0909 04:56:27.772219 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:56:27.772280 kubelet[3383]: W0909 04:56:27.772233 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:56:27.772280 kubelet[3383]: E0909 04:56:27.772250 3383 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:56:27.772583 kubelet[3383]: E0909 04:56:27.772480 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:56:27.772583 kubelet[3383]: W0909 04:56:27.772491 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:56:27.772583 kubelet[3383]: E0909 04:56:27.772503 3383 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:56:27.772725 kubelet[3383]: E0909 04:56:27.772714 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:56:27.772832 kubelet[3383]: W0909 04:56:27.772771 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:56:27.772895 kubelet[3383]: E0909 04:56:27.772882 3383 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:56:28.710491 kubelet[3383]: I0909 04:56:28.710453 3383 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 04:56:28.757409 kubelet[3383]: E0909 04:56:28.757286 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:56:28.757409 kubelet[3383]: W0909 04:56:28.757309 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:56:28.757409 kubelet[3383]: E0909 04:56:28.757331 3383 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:56:28.757809 kubelet[3383]: E0909 04:56:28.757717 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:56:28.757809 kubelet[3383]: W0909 04:56:28.757729 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:56:28.757809 kubelet[3383]: E0909 04:56:28.757765 3383 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:56:28.758157 kubelet[3383]: E0909 04:56:28.758118 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:56:28.758157 kubelet[3383]: W0909 04:56:28.758130 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:56:28.758336 kubelet[3383]: E0909 04:56:28.758140 3383 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:56:28.758473 kubelet[3383]: E0909 04:56:28.758462 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:56:28.758537 kubelet[3383]: W0909 04:56:28.758526 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:56:28.758612 kubelet[3383]: E0909 04:56:28.758600 3383 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:56:28.758815 kubelet[3383]: E0909 04:56:28.758805 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:56:28.758966 kubelet[3383]: W0909 04:56:28.758836 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:56:28.758966 kubelet[3383]: E0909 04:56:28.758847 3383 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:56:28.759119 kubelet[3383]: E0909 04:56:28.759108 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:56:28.759282 kubelet[3383]: W0909 04:56:28.759206 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:56:28.759282 kubelet[3383]: E0909 04:56:28.759220 3383 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:56:28.759519 kubelet[3383]: E0909 04:56:28.759506 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:56:28.759577 kubelet[3383]: W0909 04:56:28.759567 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:56:28.759712 kubelet[3383]: E0909 04:56:28.759623 3383 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:56:28.759817 kubelet[3383]: E0909 04:56:28.759807 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:56:28.759967 kubelet[3383]: W0909 04:56:28.759858 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:56:28.759967 kubelet[3383]: E0909 04:56:28.759873 3383 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:56:28.760086 kubelet[3383]: E0909 04:56:28.760076 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:56:28.760129 kubelet[3383]: W0909 04:56:28.760121 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:56:28.760267 kubelet[3383]: E0909 04:56:28.760183 3383 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:56:28.760456 kubelet[3383]: E0909 04:56:28.760371 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:56:28.760456 kubelet[3383]: W0909 04:56:28.760382 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:56:28.760456 kubelet[3383]: E0909 04:56:28.760390 3383 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 04:56:28.760589 kubelet[3383]: E0909 04:56:28.760580 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:56:28.760639 kubelet[3383]: W0909 04:56:28.760630 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:56:28.760691 kubelet[3383]: E0909 04:56:28.760679 3383 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 04:56:28.760940 kubelet[3383]: E0909 04:56:28.760854 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 04:56:28.760940 kubelet[3383]: W0909 04:56:28.760864 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 04:56:28.760940 kubelet[3383]: E0909 04:56:28.760872 3383 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Sep 9 04:56:28.761141 kubelet[3383]: E0909 04:56:28.761130 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 04:56:28.761276 kubelet[3383]: W0909 04:56:28.761212 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 04:56:28.761276 kubelet[3383]: E0909 04:56:28.761226 3383 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 04:56:28.761486 kubelet[3383]: E0909 04:56:28.761475 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 04:56:28.761548 kubelet[3383]: W0909 04:56:28.761539 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 04:56:28.761597 kubelet[3383]: E0909 04:56:28.761586 3383 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 04:56:28.761845 kubelet[3383]: E0909 04:56:28.761775 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 04:56:28.761845 kubelet[3383]: W0909 04:56:28.761785 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 04:56:28.761845 kubelet[3383]: E0909 04:56:28.761793 3383 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 04:56:28.771877 containerd[1866]: time="2025-09-09T04:56:28.771840511Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:56:28.774103 kubelet[3383]: E0909 04:56:28.774083 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 04:56:28.774103 kubelet[3383]: W0909 04:56:28.774097 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 04:56:28.774235 kubelet[3383]: E0909 04:56:28.774108 3383 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 04:56:28.774305 kubelet[3383]: E0909 04:56:28.774290 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 04:56:28.774305 kubelet[3383]: W0909 04:56:28.774299 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 04:56:28.774377 kubelet[3383]: E0909 04:56:28.774312 3383 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 04:56:28.774450 kubelet[3383]: E0909 04:56:28.774427 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 04:56:28.774450 kubelet[3383]: W0909 04:56:28.774434 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 04:56:28.774450 kubelet[3383]: E0909 04:56:28.774442 3383 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 04:56:28.774592 kubelet[3383]: E0909 04:56:28.774578 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 04:56:28.774592 kubelet[3383]: W0909 04:56:28.774588 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 04:56:28.774661 kubelet[3383]: E0909 04:56:28.774598 3383 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 04:56:28.774727 kubelet[3383]: E0909 04:56:28.774702 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 04:56:28.774727 kubelet[3383]: W0909 04:56:28.774708 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 04:56:28.774727 kubelet[3383]: E0909 04:56:28.774719 3383 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 04:56:28.774834 kubelet[3383]: E0909 04:56:28.774808 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 04:56:28.774834 kubelet[3383]: W0909 04:56:28.774815 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 04:56:28.774834 kubelet[3383]: E0909 04:56:28.774823 3383 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 04:56:28.774940 kubelet[3383]: E0909 04:56:28.774934 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 04:56:28.774940 kubelet[3383]: W0909 04:56:28.774939 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 04:56:28.775025 kubelet[3383]: E0909 04:56:28.774951 3383 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 04:56:28.775364 containerd[1866]: time="2025-09-09T04:56:28.775334265Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4266814"
Sep 9 04:56:28.775608 kubelet[3383]: E0909 04:56:28.775591 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 04:56:28.775608 kubelet[3383]: W0909 04:56:28.775605 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 04:56:28.775666 kubelet[3383]: E0909 04:56:28.775616 3383 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 04:56:28.776030 kubelet[3383]: E0909 04:56:28.775996 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 04:56:28.776030 kubelet[3383]: W0909 04:56:28.776009 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 04:56:28.776030 kubelet[3383]: E0909 04:56:28.776020 3383 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 04:56:28.776271 kubelet[3383]: E0909 04:56:28.776256 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 04:56:28.776461 kubelet[3383]: W0909 04:56:28.776267 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 04:56:28.776461 kubelet[3383]: E0909 04:56:28.776404 3383 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 04:56:28.776461 kubelet[3383]: E0909 04:56:28.776421 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 04:56:28.776461 kubelet[3383]: W0909 04:56:28.776429 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 04:56:28.776461 kubelet[3383]: E0909 04:56:28.776448 3383 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 04:56:28.776662 kubelet[3383]: E0909 04:56:28.776568 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 04:56:28.776662 kubelet[3383]: W0909 04:56:28.776576 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 04:56:28.777075 kubelet[3383]: E0909 04:56:28.776682 3383 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 04:56:28.777075 kubelet[3383]: E0909 04:56:28.776799 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 04:56:28.777075 kubelet[3383]: W0909 04:56:28.776805 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 04:56:28.777075 kubelet[3383]: E0909 04:56:28.776818 3383 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 04:56:28.777075 kubelet[3383]: E0909 04:56:28.776910 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 04:56:28.777075 kubelet[3383]: W0909 04:56:28.776915 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 04:56:28.777075 kubelet[3383]: E0909 04:56:28.776924 3383 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 04:56:28.778129 kubelet[3383]: E0909 04:56:28.777144 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 04:56:28.778129 kubelet[3383]: W0909 04:56:28.777153 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 04:56:28.778129 kubelet[3383]: E0909 04:56:28.777165 3383 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 04:56:28.778129 kubelet[3383]: E0909 04:56:28.777552 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 04:56:28.778129 kubelet[3383]: W0909 04:56:28.777561 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 04:56:28.778129 kubelet[3383]: E0909 04:56:28.777570 3383 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 04:56:28.778129 kubelet[3383]: E0909 04:56:28.777827 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 04:56:28.778129 kubelet[3383]: W0909 04:56:28.777836 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 04:56:28.778129 kubelet[3383]: E0909 04:56:28.777846 3383 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 04:56:28.778780 kubelet[3383]: E0909 04:56:28.778515 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 04:56:28.778780 kubelet[3383]: W0909 04:56:28.778529 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 04:56:28.778780 kubelet[3383]: E0909 04:56:28.778541 3383 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 04:56:28.781493 containerd[1866]: time="2025-09-09T04:56:28.781343980Z" level=info msg="ImageCreate event name:\"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:56:28.788131 containerd[1866]: time="2025-09-09T04:56:28.787463307Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:56:28.788902 containerd[1866]: time="2025-09-09T04:56:28.787895529Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5636015\" in 1.225742994s"
Sep 9 04:56:28.789431 containerd[1866]: time="2025-09-09T04:56:28.789399706Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\""
Sep 9 04:56:28.792210 containerd[1866]: time="2025-09-09T04:56:28.792187852Z" level=info msg="CreateContainer within sandbox \"a3cb1c2437fee0d6a7ac6e3215e919357c3cd0ed570cc544e7271ca1d02a55e9\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Sep 9 04:56:28.817748 containerd[1866]: time="2025-09-09T04:56:28.817716545Z" level=info msg="Container 6027e45d64ba29a5d0110226a0e7f047ecd6009fca7b75380e4dd96435d5b2d1: CDI devices from CRI Config.CDIDevices: []"
Sep 9 04:56:28.841022 containerd[1866]: time="2025-09-09T04:56:28.839624633Z" level=info msg="CreateContainer within sandbox \"a3cb1c2437fee0d6a7ac6e3215e919357c3cd0ed570cc544e7271ca1d02a55e9\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"6027e45d64ba29a5d0110226a0e7f047ecd6009fca7b75380e4dd96435d5b2d1\""
Sep 9 04:56:28.842633 containerd[1866]: time="2025-09-09T04:56:28.842596849Z" level=info msg="StartContainer for \"6027e45d64ba29a5d0110226a0e7f047ecd6009fca7b75380e4dd96435d5b2d1\""
Sep 9 04:56:28.843644 containerd[1866]: time="2025-09-09T04:56:28.843616498Z" level=info msg="connecting to shim 6027e45d64ba29a5d0110226a0e7f047ecd6009fca7b75380e4dd96435d5b2d1" address="unix:///run/containerd/s/4594019ecbbadff266c8994305d5bf89d1cfb196a3283fcd6aeb3d6b3f00ef39" protocol=ttrpc version=3
Sep 9 04:56:28.864160 systemd[1]: Started cri-containerd-6027e45d64ba29a5d0110226a0e7f047ecd6009fca7b75380e4dd96435d5b2d1.scope - libcontainer container 6027e45d64ba29a5d0110226a0e7f047ecd6009fca7b75380e4dd96435d5b2d1.
Sep 9 04:56:28.899702 containerd[1866]: time="2025-09-09T04:56:28.898847660Z" level=info msg="StartContainer for \"6027e45d64ba29a5d0110226a0e7f047ecd6009fca7b75380e4dd96435d5b2d1\" returns successfully"
Sep 9 04:56:28.905236 systemd[1]: cri-containerd-6027e45d64ba29a5d0110226a0e7f047ecd6009fca7b75380e4dd96435d5b2d1.scope: Deactivated successfully.
Sep 9 04:56:28.908743 containerd[1866]: time="2025-09-09T04:56:28.908708292Z" level=info msg="received exit event container_id:\"6027e45d64ba29a5d0110226a0e7f047ecd6009fca7b75380e4dd96435d5b2d1\" id:\"6027e45d64ba29a5d0110226a0e7f047ecd6009fca7b75380e4dd96435d5b2d1\" pid:4025 exited_at:{seconds:1757393788 nanos:907966220}"
Sep 9 04:56:28.908999 containerd[1866]: time="2025-09-09T04:56:28.908839929Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6027e45d64ba29a5d0110226a0e7f047ecd6009fca7b75380e4dd96435d5b2d1\" id:\"6027e45d64ba29a5d0110226a0e7f047ecd6009fca7b75380e4dd96435d5b2d1\" pid:4025 exited_at:{seconds:1757393788 nanos:907966220}"
Sep 9 04:56:28.926041 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6027e45d64ba29a5d0110226a0e7f047ecd6009fca7b75380e4dd96435d5b2d1-rootfs.mount: Deactivated successfully.
Sep 9 04:56:29.639119 kubelet[3383]: E0909 04:56:29.639056 3383 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-blr4m" podUID="ef65cad1-f38f-4d74-aff6-26558cee563c"
Sep 9 04:56:30.720001 containerd[1866]: time="2025-09-09T04:56:30.719750194Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\""
Sep 9 04:56:31.639445 kubelet[3383]: E0909 04:56:31.639392 3383 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-blr4m" podUID="ef65cad1-f38f-4d74-aff6-26558cee563c"
Sep 9 04:56:33.195210 containerd[1866]: time="2025-09-09T04:56:33.195158020Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:56:33.198980 containerd[1866]: time="2025-09-09T04:56:33.198925189Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=65913477"
Sep 9 04:56:33.203021 containerd[1866]: time="2025-09-09T04:56:33.202960966Z" level=info msg="ImageCreate event name:\"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:56:33.211564 containerd[1866]: time="2025-09-09T04:56:33.211520151Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:56:33.212155 containerd[1866]: time="2025-09-09T04:56:33.211827561Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"67282718\" in 2.492041366s"
Sep 9 04:56:33.212155 containerd[1866]: time="2025-09-09T04:56:33.211855546Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\""
Sep 9 04:56:33.214628 containerd[1866]: time="2025-09-09T04:56:33.214584793Z" level=info msg="CreateContainer within sandbox \"a3cb1c2437fee0d6a7ac6e3215e919357c3cd0ed570cc544e7271ca1d02a55e9\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Sep 9 04:56:33.256811 containerd[1866]: time="2025-09-09T04:56:33.256769069Z" level=info msg="Container 7a23d52340dd673c5837e0f66e2107b4c47aa33add7523e63e03abc725e6382b: CDI devices from CRI Config.CDIDevices: []"
Sep 9 04:56:33.260501 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3312426735.mount: Deactivated successfully.
Sep 9 04:56:33.281435 containerd[1866]: time="2025-09-09T04:56:33.281311661Z" level=info msg="CreateContainer within sandbox \"a3cb1c2437fee0d6a7ac6e3215e919357c3cd0ed570cc544e7271ca1d02a55e9\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"7a23d52340dd673c5837e0f66e2107b4c47aa33add7523e63e03abc725e6382b\""
Sep 9 04:56:33.282077 containerd[1866]: time="2025-09-09T04:56:33.282015451Z" level=info msg="StartContainer for \"7a23d52340dd673c5837e0f66e2107b4c47aa33add7523e63e03abc725e6382b\""
Sep 9 04:56:33.283280 containerd[1866]: time="2025-09-09T04:56:33.283215402Z" level=info msg="connecting to shim 7a23d52340dd673c5837e0f66e2107b4c47aa33add7523e63e03abc725e6382b" address="unix:///run/containerd/s/4594019ecbbadff266c8994305d5bf89d1cfb196a3283fcd6aeb3d6b3f00ef39" protocol=ttrpc version=3
Sep 9 04:56:33.303133 systemd[1]: Started cri-containerd-7a23d52340dd673c5837e0f66e2107b4c47aa33add7523e63e03abc725e6382b.scope - libcontainer container 7a23d52340dd673c5837e0f66e2107b4c47aa33add7523e63e03abc725e6382b.
Sep 9 04:56:33.343424 containerd[1866]: time="2025-09-09T04:56:33.343379604Z" level=info msg="StartContainer for \"7a23d52340dd673c5837e0f66e2107b4c47aa33add7523e63e03abc725e6382b\" returns successfully"
Sep 9 04:56:33.639206 kubelet[3383]: E0909 04:56:33.639138 3383 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-blr4m" podUID="ef65cad1-f38f-4d74-aff6-26558cee563c"
Sep 9 04:56:34.431580 containerd[1866]: time="2025-09-09T04:56:34.431466390Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 9 04:56:34.433724 systemd[1]: cri-containerd-7a23d52340dd673c5837e0f66e2107b4c47aa33add7523e63e03abc725e6382b.scope: Deactivated successfully.
Sep 9 04:56:34.434006 systemd[1]: cri-containerd-7a23d52340dd673c5837e0f66e2107b4c47aa33add7523e63e03abc725e6382b.scope: Consumed 318ms CPU time, 186.1M memory peak, 165.8M written to disk.
Sep 9 04:56:34.435698 containerd[1866]: time="2025-09-09T04:56:34.435659555Z" level=info msg="received exit event container_id:\"7a23d52340dd673c5837e0f66e2107b4c47aa33add7523e63e03abc725e6382b\" id:\"7a23d52340dd673c5837e0f66e2107b4c47aa33add7523e63e03abc725e6382b\" pid:4083 exited_at:{seconds:1757393794 nanos:435155683}"
Sep 9 04:56:34.435872 containerd[1866]: time="2025-09-09T04:56:34.435808016Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7a23d52340dd673c5837e0f66e2107b4c47aa33add7523e63e03abc725e6382b\" id:\"7a23d52340dd673c5837e0f66e2107b4c47aa33add7523e63e03abc725e6382b\" pid:4083 exited_at:{seconds:1757393794 nanos:435155683}"
Sep 9 04:56:34.451294 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7a23d52340dd673c5837e0f66e2107b4c47aa33add7523e63e03abc725e6382b-rootfs.mount: Deactivated successfully.
Sep 9 04:56:34.455632 kubelet[3383]: I0909 04:56:34.455522 3383 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Sep 9 04:56:34.500216 systemd[1]: Created slice kubepods-burstable-pod4e8b5849_a641_4cf1_9131_40b61016147b.slice - libcontainer container kubepods-burstable-pod4e8b5849_a641_4cf1_9131_40b61016147b.slice.
Sep 9 04:56:34.515622 systemd[1]: Created slice kubepods-besteffort-podc93fd98f_5990_467a_982e_edb4ef4740be.slice - libcontainer container kubepods-besteffort-podc93fd98f_5990_467a_982e_edb4ef4740be.slice.
Sep 9 04:56:34.844270 kubelet[3383]: I0909 04:56:34.513513 3383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xm9s\" (UniqueName: \"kubernetes.io/projected/4e8b5849-a641-4cf1-9131-40b61016147b-kube-api-access-9xm9s\") pod \"coredns-668d6bf9bc-5nxp2\" (UID: \"4e8b5849-a641-4cf1-9131-40b61016147b\") " pod="kube-system/coredns-668d6bf9bc-5nxp2"
Sep 9 04:56:34.844270 kubelet[3383]: I0909 04:56:34.513834 3383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qxm2\" (UniqueName: \"kubernetes.io/projected/57dfc688-a4c4-4ed5-b76a-fb63bc00545e-kube-api-access-5qxm2\") pod \"calico-kube-controllers-54d5b6bfc9-zsk7h\" (UID: \"57dfc688-a4c4-4ed5-b76a-fb63bc00545e\") " pod="calico-system/calico-kube-controllers-54d5b6bfc9-zsk7h"
Sep 9 04:56:34.844270 kubelet[3383]: I0909 04:56:34.514471 3383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/57dfc688-a4c4-4ed5-b76a-fb63bc00545e-tigera-ca-bundle\") pod \"calico-kube-controllers-54d5b6bfc9-zsk7h\" (UID: \"57dfc688-a4c4-4ed5-b76a-fb63bc00545e\") " pod="calico-system/calico-kube-controllers-54d5b6bfc9-zsk7h"
Sep 9 04:56:34.844270 kubelet[3383]: I0909 04:56:34.514885 3383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wt8b6\" (UniqueName: \"kubernetes.io/projected/1ee9be11-f0c2-4be0-9b26-0b300acc210d-kube-api-access-wt8b6\") pod \"calico-apiserver-646f77f6b4-tb4sw\" (UID: \"1ee9be11-f0c2-4be0-9b26-0b300acc210d\") " pod="calico-apiserver/calico-apiserver-646f77f6b4-tb4sw"
Sep 9 04:56:34.844270 kubelet[3383]: I0909 04:56:34.514985 3383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4e8b5849-a641-4cf1-9131-40b61016147b-config-volume\") pod \"coredns-668d6bf9bc-5nxp2\" (UID: \"4e8b5849-a641-4cf1-9131-40b61016147b\") " pod="kube-system/coredns-668d6bf9bc-5nxp2"
Sep 9 04:56:34.525877 systemd[1]: Created slice kubepods-besteffort-pod57dfc688_a4c4_4ed5_b76a_fb63bc00545e.slice - libcontainer container kubepods-besteffort-pod57dfc688_a4c4_4ed5_b76a_fb63bc00545e.slice.
Sep 9 04:56:34.844660 kubelet[3383]: I0909 04:56:34.515076 3383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c93fd98f-5990-467a-982e-edb4ef4740be-whisker-backend-key-pair\") pod \"whisker-797f6cff5c-mxkc5\" (UID: \"c93fd98f-5990-467a-982e-edb4ef4740be\") " pod="calico-system/whisker-797f6cff5c-mxkc5"
Sep 9 04:56:34.844660 kubelet[3383]: I0909 04:56:34.515696 3383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ppd2\" (UniqueName: \"kubernetes.io/projected/c93fd98f-5990-467a-982e-edb4ef4740be-kube-api-access-8ppd2\") pod \"whisker-797f6cff5c-mxkc5\" (UID: \"c93fd98f-5990-467a-982e-edb4ef4740be\") " pod="calico-system/whisker-797f6cff5c-mxkc5"
Sep 9 04:56:34.844660 kubelet[3383]: I0909 04:56:34.516331 3383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1ee9be11-f0c2-4be0-9b26-0b300acc210d-calico-apiserver-certs\") pod \"calico-apiserver-646f77f6b4-tb4sw\" (UID: \"1ee9be11-f0c2-4be0-9b26-0b300acc210d\") " pod="calico-apiserver/calico-apiserver-646f77f6b4-tb4sw"
Sep 9 04:56:34.844660 kubelet[3383]: I0909 04:56:34.516708 3383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c93fd98f-5990-467a-982e-edb4ef4740be-whisker-ca-bundle\") pod \"whisker-797f6cff5c-mxkc5\" (UID: \"c93fd98f-5990-467a-982e-edb4ef4740be\") " pod="calico-system/whisker-797f6cff5c-mxkc5"
Sep 9 04:56:34.844660 kubelet[3383]: I0909 04:56:34.617028 3383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njtv9\" (UniqueName: \"kubernetes.io/projected/c59e2b93-fd42-4540-8fc2-a0a9c55c3504-kube-api-access-njtv9\") pod \"goldmane-54d579b49d-kfzzp\" (UID: \"c59e2b93-fd42-4540-8fc2-a0a9c55c3504\") " pod="calico-system/goldmane-54d579b49d-kfzzp"
Sep 9 04:56:34.532981 systemd[1]: Created slice kubepods-besteffort-pod1ee9be11_f0c2_4be0_9b26_0b300acc210d.slice - libcontainer container kubepods-besteffort-pod1ee9be11_f0c2_4be0_9b26_0b300acc210d.slice.
Sep 9 04:56:34.844778 kubelet[3383]: I0909 04:56:34.617065 3383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cfb38fab-c883-40c5-a29d-aad239ca3e1a-config-volume\") pod \"coredns-668d6bf9bc-h2544\" (UID: \"cfb38fab-c883-40c5-a29d-aad239ca3e1a\") " pod="kube-system/coredns-668d6bf9bc-h2544"
Sep 9 04:56:34.844778 kubelet[3383]: I0909 04:56:34.617078 3383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ml7rf\" (UniqueName: \"kubernetes.io/projected/d841bb99-115e-474c-93ff-3dec18b6a9a8-kube-api-access-ml7rf\") pod \"calico-apiserver-646f77f6b4-ml9lp\" (UID: \"d841bb99-115e-474c-93ff-3dec18b6a9a8\") " pod="calico-apiserver/calico-apiserver-646f77f6b4-ml9lp"
Sep 9 04:56:34.844778 kubelet[3383]: I0909 04:56:34.617107 3383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4565n\" (UniqueName: \"kubernetes.io/projected/cfb38fab-c883-40c5-a29d-aad239ca3e1a-kube-api-access-4565n\") pod \"coredns-668d6bf9bc-h2544\" (UID: \"cfb38fab-c883-40c5-a29d-aad239ca3e1a\") " pod="kube-system/coredns-668d6bf9bc-h2544"
Sep 9 04:56:34.844778 kubelet[3383]: I0909 04:56:34.617137 3383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/c59e2b93-fd42-4540-8fc2-a0a9c55c3504-goldmane-key-pair\") pod \"goldmane-54d579b49d-kfzzp\" (UID: \"c59e2b93-fd42-4540-8fc2-a0a9c55c3504\") " pod="calico-system/goldmane-54d579b49d-kfzzp"
Sep 9 04:56:34.844778 kubelet[3383]: I0909 04:56:34.617148 3383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c59e2b93-fd42-4540-8fc2-a0a9c55c3504-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-kfzzp\" (UID: \"c59e2b93-fd42-4540-8fc2-a0a9c55c3504\") " pod="calico-system/goldmane-54d579b49d-kfzzp"
Sep 9 04:56:34.538071 systemd[1]: Created slice kubepods-besteffort-podc59e2b93_fd42_4540_8fc2_a0a9c55c3504.slice - libcontainer container kubepods-besteffort-podc59e2b93_fd42_4540_8fc2_a0a9c55c3504.slice.
Sep 9 04:56:34.844880 kubelet[3383]: I0909 04:56:34.617159 3383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d841bb99-115e-474c-93ff-3dec18b6a9a8-calico-apiserver-certs\") pod \"calico-apiserver-646f77f6b4-ml9lp\" (UID: \"d841bb99-115e-474c-93ff-3dec18b6a9a8\") " pod="calico-apiserver/calico-apiserver-646f77f6b4-ml9lp"
Sep 9 04:56:34.844880 kubelet[3383]: I0909 04:56:34.617171 3383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c59e2b93-fd42-4540-8fc2-a0a9c55c3504-config\") pod \"goldmane-54d579b49d-kfzzp\" (UID: \"c59e2b93-fd42-4540-8fc2-a0a9c55c3504\") " pod="calico-system/goldmane-54d579b49d-kfzzp"
Sep 9 04:56:34.543280 systemd[1]: Created slice kubepods-burstable-podcfb38fab_c883_40c5_a29d_aad239ca3e1a.slice - libcontainer container kubepods-burstable-podcfb38fab_c883_40c5_a29d_aad239ca3e1a.slice.
Sep 9 04:56:34.547511 systemd[1]: Created slice kubepods-besteffort-podd841bb99_115e_474c_93ff_3dec18b6a9a8.slice - libcontainer container kubepods-besteffort-podd841bb99_115e_474c_93ff_3dec18b6a9a8.slice.
Sep 9 04:56:35.149077 containerd[1866]: time="2025-09-09T04:56:35.148885757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-h2544,Uid:cfb38fab-c883-40c5-a29d-aad239ca3e1a,Namespace:kube-system,Attempt:0,}"
Sep 9 04:56:35.149455 containerd[1866]: time="2025-09-09T04:56:35.149432694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-kfzzp,Uid:c59e2b93-fd42-4540-8fc2-a0a9c55c3504,Namespace:calico-system,Attempt:0,}"
Sep 9 04:56:35.149566 containerd[1866]: time="2025-09-09T04:56:35.149549122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-646f77f6b4-tb4sw,Uid:1ee9be11-f0c2-4be0-9b26-0b300acc210d,Namespace:calico-apiserver,Attempt:0,}"
Sep 9 04:56:35.149644 containerd[1866]: time="2025-09-09T04:56:35.149628165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5nxp2,Uid:4e8b5849-a641-4cf1-9131-40b61016147b,Namespace:kube-system,Attempt:0,}"
Sep 9 04:56:35.155064 containerd[1866]: time="2025-09-09T04:56:35.155040818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-646f77f6b4-ml9lp,Uid:d841bb99-115e-474c-93ff-3dec18b6a9a8,Namespace:calico-apiserver,Attempt:0,}"
Sep 9 04:56:35.166795 containerd[1866]: time="2025-09-09T04:56:35.166743815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-797f6cff5c-mxkc5,Uid:c93fd98f-5990-467a-982e-edb4ef4740be,Namespace:calico-system,Attempt:0,}"
Sep 9 04:56:35.166941 containerd[1866]: time="2025-09-09T04:56:35.166759688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54d5b6bfc9-zsk7h,Uid:57dfc688-a4c4-4ed5-b76a-fb63bc00545e,Namespace:calico-system,Attempt:0,}"
Sep 9 04:56:35.444986 containerd[1866]: time="2025-09-09T04:56:35.444804659Z" level=error msg="Failed to destroy network for sandbox \"3ee29ea93f5efe241a829b7529d181ac767bb44d13751c77c4300fa20e11b118\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 04:56:35.458912 containerd[1866]: time="2025-09-09T04:56:35.458031393Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-h2544,Uid:cfb38fab-c883-40c5-a29d-aad239ca3e1a,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ee29ea93f5efe241a829b7529d181ac767bb44d13751c77c4300fa20e11b118\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 04:56:35.459130 kubelet[3383]: E0909 04:56:35.458864 3383 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ee29ea93f5efe241a829b7529d181ac767bb44d13751c77c4300fa20e11b118\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 04:56:35.459130 kubelet[3383]: E0909 04:56:35.458940 3383 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ee29ea93f5efe241a829b7529d181ac767bb44d13751c77c4300fa20e11b118\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-h2544"
Sep 9 04:56:35.459130 kubelet[3383]: E0909 04:56:35.459017 3383 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error:
code = Unknown desc = failed to setup network for sandbox \"3ee29ea93f5efe241a829b7529d181ac767bb44d13751c77c4300fa20e11b118\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-h2544" Sep 9 04:56:35.460222 kubelet[3383]: E0909 04:56:35.459116 3383 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-h2544_kube-system(cfb38fab-c883-40c5-a29d-aad239ca3e1a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-h2544_kube-system(cfb38fab-c883-40c5-a29d-aad239ca3e1a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3ee29ea93f5efe241a829b7529d181ac767bb44d13751c77c4300fa20e11b118\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-h2544" podUID="cfb38fab-c883-40c5-a29d-aad239ca3e1a" Sep 9 04:56:35.466202 containerd[1866]: time="2025-09-09T04:56:35.466169885Z" level=error msg="Failed to destroy network for sandbox \"d60eace5ad646a7e2de981ba59d27ef896d82107c6eb945323f18b675a8bc8b4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 04:56:35.467617 systemd[1]: run-netns-cni\x2dee4af45b\x2d11ae\x2d8eb1\x2d6bcc\x2d7d7caf77374b.mount: Deactivated successfully. 
Sep 9 04:56:35.474904 containerd[1866]: time="2025-09-09T04:56:35.474800225Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-kfzzp,Uid:c59e2b93-fd42-4540-8fc2-a0a9c55c3504,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d60eace5ad646a7e2de981ba59d27ef896d82107c6eb945323f18b675a8bc8b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 04:56:35.475440 kubelet[3383]: E0909 04:56:35.475233 3383 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d60eace5ad646a7e2de981ba59d27ef896d82107c6eb945323f18b675a8bc8b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 04:56:35.475784 kubelet[3383]: E0909 04:56:35.475532 3383 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d60eace5ad646a7e2de981ba59d27ef896d82107c6eb945323f18b675a8bc8b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-kfzzp" Sep 9 04:56:35.475784 kubelet[3383]: E0909 04:56:35.475553 3383 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d60eace5ad646a7e2de981ba59d27ef896d82107c6eb945323f18b675a8bc8b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/goldmane-54d579b49d-kfzzp" Sep 9 04:56:35.475784 kubelet[3383]: E0909 04:56:35.475597 3383 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-kfzzp_calico-system(c59e2b93-fd42-4540-8fc2-a0a9c55c3504)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-kfzzp_calico-system(c59e2b93-fd42-4540-8fc2-a0a9c55c3504)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d60eace5ad646a7e2de981ba59d27ef896d82107c6eb945323f18b675a8bc8b4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-kfzzp" podUID="c59e2b93-fd42-4540-8fc2-a0a9c55c3504" Sep 9 04:56:35.503185 containerd[1866]: time="2025-09-09T04:56:35.503141810Z" level=error msg="Failed to destroy network for sandbox \"ee615e28a7e358ab3e4b527f4efb0a0a6a76a008d6ced82c7fd2a1f27c3d91c9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 04:56:35.504722 systemd[1]: run-netns-cni\x2db904a22c\x2dba5b\x2d60a3\x2d3ed8\x2d5a9ca70c8952.mount: Deactivated successfully. 
Sep 9 04:56:35.509552 containerd[1866]: time="2025-09-09T04:56:35.509509078Z" level=error msg="Failed to destroy network for sandbox \"9148629b09b0fb12f08f93e25da4b835cd7df8c6b39c25e2c088b282aa34959b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 04:56:35.510185 containerd[1866]: time="2025-09-09T04:56:35.510068496Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-646f77f6b4-tb4sw,Uid:1ee9be11-f0c2-4be0-9b26-0b300acc210d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee615e28a7e358ab3e4b527f4efb0a0a6a76a008d6ced82c7fd2a1f27c3d91c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 04:56:35.510628 kubelet[3383]: E0909 04:56:35.510376 3383 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee615e28a7e358ab3e4b527f4efb0a0a6a76a008d6ced82c7fd2a1f27c3d91c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 04:56:35.511490 systemd[1]: run-netns-cni\x2da6dee2a2\x2d0c22\x2debad\x2de5ba\x2d3d2d47aa1438.mount: Deactivated successfully. 
Sep 9 04:56:35.512067 kubelet[3383]: E0909 04:56:35.511557 3383 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee615e28a7e358ab3e4b527f4efb0a0a6a76a008d6ced82c7fd2a1f27c3d91c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-646f77f6b4-tb4sw" Sep 9 04:56:35.512067 kubelet[3383]: E0909 04:56:35.511583 3383 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee615e28a7e358ab3e4b527f4efb0a0a6a76a008d6ced82c7fd2a1f27c3d91c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-646f77f6b4-tb4sw" Sep 9 04:56:35.512067 kubelet[3383]: E0909 04:56:35.511626 3383 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-646f77f6b4-tb4sw_calico-apiserver(1ee9be11-f0c2-4be0-9b26-0b300acc210d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-646f77f6b4-tb4sw_calico-apiserver(1ee9be11-f0c2-4be0-9b26-0b300acc210d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ee615e28a7e358ab3e4b527f4efb0a0a6a76a008d6ced82c7fd2a1f27c3d91c9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-646f77f6b4-tb4sw" podUID="1ee9be11-f0c2-4be0-9b26-0b300acc210d" Sep 9 04:56:35.518099 containerd[1866]: time="2025-09-09T04:56:35.518050959Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-5nxp2,Uid:4e8b5849-a641-4cf1-9131-40b61016147b,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9148629b09b0fb12f08f93e25da4b835cd7df8c6b39c25e2c088b282aa34959b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 04:56:35.518340 kubelet[3383]: E0909 04:56:35.518306 3383 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9148629b09b0fb12f08f93e25da4b835cd7df8c6b39c25e2c088b282aa34959b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 04:56:35.518447 kubelet[3383]: E0909 04:56:35.518354 3383 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9148629b09b0fb12f08f93e25da4b835cd7df8c6b39c25e2c088b282aa34959b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-5nxp2" Sep 9 04:56:35.518447 kubelet[3383]: E0909 04:56:35.518369 3383 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9148629b09b0fb12f08f93e25da4b835cd7df8c6b39c25e2c088b282aa34959b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-5nxp2" Sep 9 04:56:35.518447 kubelet[3383]: E0909 04:56:35.518407 3383 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-5nxp2_kube-system(4e8b5849-a641-4cf1-9131-40b61016147b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-5nxp2_kube-system(4e8b5849-a641-4cf1-9131-40b61016147b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9148629b09b0fb12f08f93e25da4b835cd7df8c6b39c25e2c088b282aa34959b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-5nxp2" podUID="4e8b5849-a641-4cf1-9131-40b61016147b" Sep 9 04:56:35.531011 containerd[1866]: time="2025-09-09T04:56:35.530942211Z" level=error msg="Failed to destroy network for sandbox \"17eed5c98d9a75337bf387f9c1e56bcdf7764881a81583b614cad03719da20be\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 04:56:35.532920 systemd[1]: run-netns-cni\x2d6c2f22b0\x2d4a7f\x2d2ffd\x2dfff6\x2d1e67fce10638.mount: Deactivated successfully. 
Sep 9 04:56:35.533255 containerd[1866]: time="2025-09-09T04:56:35.533230972Z" level=error msg="Failed to destroy network for sandbox \"68254f26d5564b3e31d1938caef2ed81e98a594436aabc9a50f09bc93d709bf5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 04:56:35.534165 containerd[1866]: time="2025-09-09T04:56:35.534131280Z" level=error msg="Failed to destroy network for sandbox \"e2587640ff8bd1c7d43a5e51087d6e9cdcdb1d573181da16be9ac04b96463d1f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 04:56:35.535633 containerd[1866]: time="2025-09-09T04:56:35.535599767Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-646f77f6b4-ml9lp,Uid:d841bb99-115e-474c-93ff-3dec18b6a9a8,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"17eed5c98d9a75337bf387f9c1e56bcdf7764881a81583b614cad03719da20be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 04:56:35.535964 kubelet[3383]: E0909 04:56:35.535920 3383 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17eed5c98d9a75337bf387f9c1e56bcdf7764881a81583b614cad03719da20be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 04:56:35.536351 kubelet[3383]: E0909 04:56:35.536019 3383 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network 
for sandbox \"17eed5c98d9a75337bf387f9c1e56bcdf7764881a81583b614cad03719da20be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-646f77f6b4-ml9lp" Sep 9 04:56:35.536351 kubelet[3383]: E0909 04:56:35.536042 3383 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17eed5c98d9a75337bf387f9c1e56bcdf7764881a81583b614cad03719da20be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-646f77f6b4-ml9lp" Sep 9 04:56:35.536351 kubelet[3383]: E0909 04:56:35.536086 3383 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-646f77f6b4-ml9lp_calico-apiserver(d841bb99-115e-474c-93ff-3dec18b6a9a8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-646f77f6b4-ml9lp_calico-apiserver(d841bb99-115e-474c-93ff-3dec18b6a9a8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"17eed5c98d9a75337bf387f9c1e56bcdf7764881a81583b614cad03719da20be\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-646f77f6b4-ml9lp" podUID="d841bb99-115e-474c-93ff-3dec18b6a9a8" Sep 9 04:56:35.539378 containerd[1866]: time="2025-09-09T04:56:35.539345167Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-797f6cff5c-mxkc5,Uid:c93fd98f-5990-467a-982e-edb4ef4740be,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"68254f26d5564b3e31d1938caef2ed81e98a594436aabc9a50f09bc93d709bf5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 04:56:35.539701 kubelet[3383]: E0909 04:56:35.539598 3383 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68254f26d5564b3e31d1938caef2ed81e98a594436aabc9a50f09bc93d709bf5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 04:56:35.539701 kubelet[3383]: E0909 04:56:35.539655 3383 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68254f26d5564b3e31d1938caef2ed81e98a594436aabc9a50f09bc93d709bf5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-797f6cff5c-mxkc5" Sep 9 04:56:35.539701 kubelet[3383]: E0909 04:56:35.539668 3383 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68254f26d5564b3e31d1938caef2ed81e98a594436aabc9a50f09bc93d709bf5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-797f6cff5c-mxkc5" Sep 9 04:56:35.539836 kubelet[3383]: E0909 04:56:35.539815 3383 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-797f6cff5c-mxkc5_calico-system(c93fd98f-5990-467a-982e-edb4ef4740be)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"whisker-797f6cff5c-mxkc5_calico-system(c93fd98f-5990-467a-982e-edb4ef4740be)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"68254f26d5564b3e31d1938caef2ed81e98a594436aabc9a50f09bc93d709bf5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-797f6cff5c-mxkc5" podUID="c93fd98f-5990-467a-982e-edb4ef4740be" Sep 9 04:56:35.542957 containerd[1866]: time="2025-09-09T04:56:35.542848303Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54d5b6bfc9-zsk7h,Uid:57dfc688-a4c4-4ed5-b76a-fb63bc00545e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2587640ff8bd1c7d43a5e51087d6e9cdcdb1d573181da16be9ac04b96463d1f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 04:56:35.543381 kubelet[3383]: E0909 04:56:35.543340 3383 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2587640ff8bd1c7d43a5e51087d6e9cdcdb1d573181da16be9ac04b96463d1f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 04:56:35.543516 kubelet[3383]: E0909 04:56:35.543475 3383 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2587640ff8bd1c7d43a5e51087d6e9cdcdb1d573181da16be9ac04b96463d1f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-54d5b6bfc9-zsk7h" Sep 9 04:56:35.543516 kubelet[3383]: E0909 04:56:35.543494 3383 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2587640ff8bd1c7d43a5e51087d6e9cdcdb1d573181da16be9ac04b96463d1f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-54d5b6bfc9-zsk7h" Sep 9 04:56:35.543625 kubelet[3383]: E0909 04:56:35.543603 3383 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-54d5b6bfc9-zsk7h_calico-system(57dfc688-a4c4-4ed5-b76a-fb63bc00545e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-54d5b6bfc9-zsk7h_calico-system(57dfc688-a4c4-4ed5-b76a-fb63bc00545e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e2587640ff8bd1c7d43a5e51087d6e9cdcdb1d573181da16be9ac04b96463d1f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-54d5b6bfc9-zsk7h" podUID="57dfc688-a4c4-4ed5-b76a-fb63bc00545e" Sep 9 04:56:35.643562 systemd[1]: Created slice kubepods-besteffort-podef65cad1_f38f_4d74_aff6_26558cee563c.slice - libcontainer container kubepods-besteffort-podef65cad1_f38f_4d74_aff6_26558cee563c.slice. 
Sep 9 04:56:35.645839 containerd[1866]: time="2025-09-09T04:56:35.645802432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-blr4m,Uid:ef65cad1-f38f-4d74-aff6-26558cee563c,Namespace:calico-system,Attempt:0,}" Sep 9 04:56:35.691873 containerd[1866]: time="2025-09-09T04:56:35.691771389Z" level=error msg="Failed to destroy network for sandbox \"fc3a950c75b7418580e5607f17802289c63dcef9d7a32f9df265bffad43feed4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 04:56:35.696088 containerd[1866]: time="2025-09-09T04:56:35.695960506Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-blr4m,Uid:ef65cad1-f38f-4d74-aff6-26558cee563c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc3a950c75b7418580e5607f17802289c63dcef9d7a32f9df265bffad43feed4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 04:56:35.696713 kubelet[3383]: E0909 04:56:35.696355 3383 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc3a950c75b7418580e5607f17802289c63dcef9d7a32f9df265bffad43feed4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 04:56:35.696713 kubelet[3383]: E0909 04:56:35.696409 3383 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc3a950c75b7418580e5607f17802289c63dcef9d7a32f9df265bffad43feed4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-blr4m" Sep 9 04:56:35.696713 kubelet[3383]: E0909 04:56:35.696429 3383 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc3a950c75b7418580e5607f17802289c63dcef9d7a32f9df265bffad43feed4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-blr4m" Sep 9 04:56:35.696892 kubelet[3383]: E0909 04:56:35.696460 3383 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-blr4m_calico-system(ef65cad1-f38f-4d74-aff6-26558cee563c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-blr4m_calico-system(ef65cad1-f38f-4d74-aff6-26558cee563c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fc3a950c75b7418580e5607f17802289c63dcef9d7a32f9df265bffad43feed4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-blr4m" podUID="ef65cad1-f38f-4d74-aff6-26558cee563c" Sep 9 04:56:35.731713 containerd[1866]: time="2025-09-09T04:56:35.731513178Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 9 04:56:36.451695 systemd[1]: run-netns-cni\x2dbf001106\x2d759a\x2d77a5\x2deea7\x2d990deb26dc8e.mount: Deactivated successfully. Sep 9 04:56:36.451780 systemd[1]: run-netns-cni\x2dd7b66cb0\x2d2b15\x2d48b8\x2da595\x2dded3d131fb47.mount: Deactivated successfully. Sep 9 04:56:39.396672 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount963563891.mount: Deactivated successfully. 
Sep 9 04:56:39.708111 containerd[1866]: time="2025-09-09T04:56:39.707736583Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:56:39.711020 containerd[1866]: time="2025-09-09T04:56:39.710876503Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=151100457" Sep 9 04:56:39.714783 containerd[1866]: time="2025-09-09T04:56:39.714734516Z" level=info msg="ImageCreate event name:\"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:56:39.720294 containerd[1866]: time="2025-09-09T04:56:39.720248831Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:56:39.720703 containerd[1866]: time="2025-09-09T04:56:39.720598249Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"151100319\" in 3.989058414s" Sep 9 04:56:39.720703 containerd[1866]: time="2025-09-09T04:56:39.720627210Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\"" Sep 9 04:56:39.733911 containerd[1866]: time="2025-09-09T04:56:39.733884144Z" level=info msg="CreateContainer within sandbox \"a3cb1c2437fee0d6a7ac6e3215e919357c3cd0ed570cc544e7271ca1d02a55e9\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 9 04:56:39.764911 containerd[1866]: time="2025-09-09T04:56:39.764803894Z" level=info msg="Container 
2fef469f821f5a8a3c89f16be6a251680bf2e8dee3b39f9b54d9592cad224321: CDI devices from CRI Config.CDIDevices: []" Sep 9 04:56:39.786829 containerd[1866]: time="2025-09-09T04:56:39.786784697Z" level=info msg="CreateContainer within sandbox \"a3cb1c2437fee0d6a7ac6e3215e919357c3cd0ed570cc544e7271ca1d02a55e9\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"2fef469f821f5a8a3c89f16be6a251680bf2e8dee3b39f9b54d9592cad224321\"" Sep 9 04:56:39.787515 containerd[1866]: time="2025-09-09T04:56:39.787488005Z" level=info msg="StartContainer for \"2fef469f821f5a8a3c89f16be6a251680bf2e8dee3b39f9b54d9592cad224321\"" Sep 9 04:56:39.788767 containerd[1866]: time="2025-09-09T04:56:39.788742848Z" level=info msg="connecting to shim 2fef469f821f5a8a3c89f16be6a251680bf2e8dee3b39f9b54d9592cad224321" address="unix:///run/containerd/s/4594019ecbbadff266c8994305d5bf89d1cfb196a3283fcd6aeb3d6b3f00ef39" protocol=ttrpc version=3 Sep 9 04:56:39.808174 systemd[1]: Started cri-containerd-2fef469f821f5a8a3c89f16be6a251680bf2e8dee3b39f9b54d9592cad224321.scope - libcontainer container 2fef469f821f5a8a3c89f16be6a251680bf2e8dee3b39f9b54d9592cad224321. Sep 9 04:56:39.853719 containerd[1866]: time="2025-09-09T04:56:39.853681181Z" level=info msg="StartContainer for \"2fef469f821f5a8a3c89f16be6a251680bf2e8dee3b39f9b54d9592cad224321\" returns successfully" Sep 9 04:56:40.282978 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 9 04:56:40.283126 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. 
Sep 9 04:56:40.454898 kubelet[3383]: I0909 04:56:40.454344 3383 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c93fd98f-5990-467a-982e-edb4ef4740be-whisker-backend-key-pair\") pod \"c93fd98f-5990-467a-982e-edb4ef4740be\" (UID: \"c93fd98f-5990-467a-982e-edb4ef4740be\") " Sep 9 04:56:40.454898 kubelet[3383]: I0909 04:56:40.454399 3383 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c93fd98f-5990-467a-982e-edb4ef4740be-whisker-ca-bundle\") pod \"c93fd98f-5990-467a-982e-edb4ef4740be\" (UID: \"c93fd98f-5990-467a-982e-edb4ef4740be\") " Sep 9 04:56:40.454898 kubelet[3383]: I0909 04:56:40.454418 3383 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ppd2\" (UniqueName: \"kubernetes.io/projected/c93fd98f-5990-467a-982e-edb4ef4740be-kube-api-access-8ppd2\") pod \"c93fd98f-5990-467a-982e-edb4ef4740be\" (UID: \"c93fd98f-5990-467a-982e-edb4ef4740be\") " Sep 9 04:56:40.456841 kubelet[3383]: I0909 04:56:40.456511 3383 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c93fd98f-5990-467a-982e-edb4ef4740be-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "c93fd98f-5990-467a-982e-edb4ef4740be" (UID: "c93fd98f-5990-467a-982e-edb4ef4740be"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 9 04:56:40.460068 systemd[1]: var-lib-kubelet-pods-c93fd98f\x2d5990\x2d467a\x2d982e\x2dedb4ef4740be-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 9 04:56:40.463466 systemd[1]: var-lib-kubelet-pods-c93fd98f\x2d5990\x2d467a\x2d982e\x2dedb4ef4740be-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8ppd2.mount: Deactivated successfully. 
Sep 9 04:56:40.464346 kubelet[3383]: I0909 04:56:40.464314 3383 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c93fd98f-5990-467a-982e-edb4ef4740be-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "c93fd98f-5990-467a-982e-edb4ef4740be" (UID: "c93fd98f-5990-467a-982e-edb4ef4740be"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 9 04:56:40.464552 kubelet[3383]: I0909 04:56:40.464461 3383 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c93fd98f-5990-467a-982e-edb4ef4740be-kube-api-access-8ppd2" (OuterVolumeSpecName: "kube-api-access-8ppd2") pod "c93fd98f-5990-467a-982e-edb4ef4740be" (UID: "c93fd98f-5990-467a-982e-edb4ef4740be"). InnerVolumeSpecName "kube-api-access-8ppd2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 9 04:56:40.555372 kubelet[3383]: I0909 04:56:40.555242 3383 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c93fd98f-5990-467a-982e-edb4ef4740be-whisker-ca-bundle\") on node \"ci-4452.0.0-n-087888047c\" DevicePath \"\"" Sep 9 04:56:40.555644 kubelet[3383]: I0909 04:56:40.555605 3383 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8ppd2\" (UniqueName: \"kubernetes.io/projected/c93fd98f-5990-467a-982e-edb4ef4740be-kube-api-access-8ppd2\") on node \"ci-4452.0.0-n-087888047c\" DevicePath \"\"" Sep 9 04:56:40.555644 kubelet[3383]: I0909 04:56:40.555621 3383 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c93fd98f-5990-467a-982e-edb4ef4740be-whisker-backend-key-pair\") on node \"ci-4452.0.0-n-087888047c\" DevicePath \"\"" Sep 9 04:56:40.645162 systemd[1]: Removed slice kubepods-besteffort-podc93fd98f_5990_467a_982e_edb4ef4740be.slice - libcontainer container 
kubepods-besteffort-podc93fd98f_5990_467a_982e_edb4ef4740be.slice. Sep 9 04:56:40.768475 kubelet[3383]: I0909 04:56:40.768362 3383 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-8pt4f" podStartSLOduration=1.959569047 podStartE2EDuration="15.768346885s" podCreationTimestamp="2025-09-09 04:56:25 +0000 UTC" firstStartedPulling="2025-09-09 04:56:25.912486894 +0000 UTC m=+19.419437843" lastFinishedPulling="2025-09-09 04:56:39.721264732 +0000 UTC m=+33.228215681" observedRunningTime="2025-09-09 04:56:40.764828882 +0000 UTC m=+34.271779823" watchObservedRunningTime="2025-09-09 04:56:40.768346885 +0000 UTC m=+34.275297850" Sep 9 04:56:40.830960 systemd[1]: Created slice kubepods-besteffort-pod955b91f3_f1ea_4c71_87e0_f03e60d99868.slice - libcontainer container kubepods-besteffort-pod955b91f3_f1ea_4c71_87e0_f03e60d99868.slice. Sep 9 04:56:40.852258 containerd[1866]: time="2025-09-09T04:56:40.852225119Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2fef469f821f5a8a3c89f16be6a251680bf2e8dee3b39f9b54d9592cad224321\" id:\"fce48e9a91053abd239e9c5e11377ea828073dc93de506aef2602b2fffbbf420\" pid:4435 exit_status:1 exited_at:{seconds:1757393800 nanos:851008973}" Sep 9 04:56:40.858632 kubelet[3383]: I0909 04:56:40.858558 3383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/955b91f3-f1ea-4c71-87e0-f03e60d99868-whisker-ca-bundle\") pod \"whisker-78fdc4b84-h24zs\" (UID: \"955b91f3-f1ea-4c71-87e0-f03e60d99868\") " pod="calico-system/whisker-78fdc4b84-h24zs" Sep 9 04:56:40.858828 kubelet[3383]: I0909 04:56:40.858760 3383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hc6mz\" (UniqueName: \"kubernetes.io/projected/955b91f3-f1ea-4c71-87e0-f03e60d99868-kube-api-access-hc6mz\") pod \"whisker-78fdc4b84-h24zs\" (UID: 
\"955b91f3-f1ea-4c71-87e0-f03e60d99868\") " pod="calico-system/whisker-78fdc4b84-h24zs" Sep 9 04:56:40.858828 kubelet[3383]: I0909 04:56:40.858808 3383 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/955b91f3-f1ea-4c71-87e0-f03e60d99868-whisker-backend-key-pair\") pod \"whisker-78fdc4b84-h24zs\" (UID: \"955b91f3-f1ea-4c71-87e0-f03e60d99868\") " pod="calico-system/whisker-78fdc4b84-h24zs" Sep 9 04:56:41.134733 containerd[1866]: time="2025-09-09T04:56:41.134696019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-78fdc4b84-h24zs,Uid:955b91f3-f1ea-4c71-87e0-f03e60d99868,Namespace:calico-system,Attempt:0,}" Sep 9 04:56:41.255151 systemd-networkd[1687]: caliad5f396892a: Link UP Sep 9 04:56:41.255841 systemd-networkd[1687]: caliad5f396892a: Gained carrier Sep 9 04:56:41.274756 containerd[1866]: 2025-09-09 04:56:41.156 [INFO][4449] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 04:56:41.274756 containerd[1866]: 2025-09-09 04:56:41.178 [INFO][4449] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4452.0.0--n--087888047c-k8s-whisker--78fdc4b84--h24zs-eth0 whisker-78fdc4b84- calico-system 955b91f3-f1ea-4c71-87e0-f03e60d99868 851 0 2025-09-09 04:56:40 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:78fdc4b84 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4452.0.0-n-087888047c whisker-78fdc4b84-h24zs eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] caliad5f396892a [] [] }} ContainerID="6b76204070a7ba2428e5997faaf86a95d3c1eb4a1b8c657252654af90cac7073" Namespace="calico-system" Pod="whisker-78fdc4b84-h24zs" WorkloadEndpoint="ci--4452.0.0--n--087888047c-k8s-whisker--78fdc4b84--h24zs-" Sep 9 04:56:41.274756 containerd[1866]: 
2025-09-09 04:56:41.178 [INFO][4449] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6b76204070a7ba2428e5997faaf86a95d3c1eb4a1b8c657252654af90cac7073" Namespace="calico-system" Pod="whisker-78fdc4b84-h24zs" WorkloadEndpoint="ci--4452.0.0--n--087888047c-k8s-whisker--78fdc4b84--h24zs-eth0" Sep 9 04:56:41.274756 containerd[1866]: 2025-09-09 04:56:41.197 [INFO][4461] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6b76204070a7ba2428e5997faaf86a95d3c1eb4a1b8c657252654af90cac7073" HandleID="k8s-pod-network.6b76204070a7ba2428e5997faaf86a95d3c1eb4a1b8c657252654af90cac7073" Workload="ci--4452.0.0--n--087888047c-k8s-whisker--78fdc4b84--h24zs-eth0" Sep 9 04:56:41.274917 containerd[1866]: 2025-09-09 04:56:41.197 [INFO][4461] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6b76204070a7ba2428e5997faaf86a95d3c1eb4a1b8c657252654af90cac7073" HandleID="k8s-pod-network.6b76204070a7ba2428e5997faaf86a95d3c1eb4a1b8c657252654af90cac7073" Workload="ci--4452.0.0--n--087888047c-k8s-whisker--78fdc4b84--h24zs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b210), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4452.0.0-n-087888047c", "pod":"whisker-78fdc4b84-h24zs", "timestamp":"2025-09-09 04:56:41.197403481 +0000 UTC"}, Hostname:"ci-4452.0.0-n-087888047c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 04:56:41.274917 containerd[1866]: 2025-09-09 04:56:41.197 [INFO][4461] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 04:56:41.274917 containerd[1866]: 2025-09-09 04:56:41.197 [INFO][4461] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 04:56:41.274917 containerd[1866]: 2025-09-09 04:56:41.197 [INFO][4461] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4452.0.0-n-087888047c' Sep 9 04:56:41.274917 containerd[1866]: 2025-09-09 04:56:41.203 [INFO][4461] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6b76204070a7ba2428e5997faaf86a95d3c1eb4a1b8c657252654af90cac7073" host="ci-4452.0.0-n-087888047c" Sep 9 04:56:41.274917 containerd[1866]: 2025-09-09 04:56:41.207 [INFO][4461] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4452.0.0-n-087888047c" Sep 9 04:56:41.274917 containerd[1866]: 2025-09-09 04:56:41.211 [INFO][4461] ipam/ipam.go 511: Trying affinity for 192.168.111.0/26 host="ci-4452.0.0-n-087888047c" Sep 9 04:56:41.274917 containerd[1866]: 2025-09-09 04:56:41.212 [INFO][4461] ipam/ipam.go 158: Attempting to load block cidr=192.168.111.0/26 host="ci-4452.0.0-n-087888047c" Sep 9 04:56:41.274917 containerd[1866]: 2025-09-09 04:56:41.214 [INFO][4461] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.111.0/26 host="ci-4452.0.0-n-087888047c" Sep 9 04:56:41.275066 containerd[1866]: 2025-09-09 04:56:41.214 [INFO][4461] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.111.0/26 handle="k8s-pod-network.6b76204070a7ba2428e5997faaf86a95d3c1eb4a1b8c657252654af90cac7073" host="ci-4452.0.0-n-087888047c" Sep 9 04:56:41.275066 containerd[1866]: 2025-09-09 04:56:41.216 [INFO][4461] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6b76204070a7ba2428e5997faaf86a95d3c1eb4a1b8c657252654af90cac7073 Sep 9 04:56:41.275066 containerd[1866]: 2025-09-09 04:56:41.220 [INFO][4461] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.111.0/26 handle="k8s-pod-network.6b76204070a7ba2428e5997faaf86a95d3c1eb4a1b8c657252654af90cac7073" host="ci-4452.0.0-n-087888047c" Sep 9 04:56:41.275066 containerd[1866]: 2025-09-09 04:56:41.230 [INFO][4461] ipam/ipam.go 1256: Successfully claimed 
IPs: [192.168.111.1/26] block=192.168.111.0/26 handle="k8s-pod-network.6b76204070a7ba2428e5997faaf86a95d3c1eb4a1b8c657252654af90cac7073" host="ci-4452.0.0-n-087888047c" Sep 9 04:56:41.275066 containerd[1866]: 2025-09-09 04:56:41.231 [INFO][4461] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.111.1/26] handle="k8s-pod-network.6b76204070a7ba2428e5997faaf86a95d3c1eb4a1b8c657252654af90cac7073" host="ci-4452.0.0-n-087888047c" Sep 9 04:56:41.275066 containerd[1866]: 2025-09-09 04:56:41.231 [INFO][4461] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 04:56:41.275066 containerd[1866]: 2025-09-09 04:56:41.231 [INFO][4461] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.111.1/26] IPv6=[] ContainerID="6b76204070a7ba2428e5997faaf86a95d3c1eb4a1b8c657252654af90cac7073" HandleID="k8s-pod-network.6b76204070a7ba2428e5997faaf86a95d3c1eb4a1b8c657252654af90cac7073" Workload="ci--4452.0.0--n--087888047c-k8s-whisker--78fdc4b84--h24zs-eth0" Sep 9 04:56:41.275156 containerd[1866]: 2025-09-09 04:56:41.233 [INFO][4449] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6b76204070a7ba2428e5997faaf86a95d3c1eb4a1b8c657252654af90cac7073" Namespace="calico-system" Pod="whisker-78fdc4b84-h24zs" WorkloadEndpoint="ci--4452.0.0--n--087888047c-k8s-whisker--78fdc4b84--h24zs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452.0.0--n--087888047c-k8s-whisker--78fdc4b84--h24zs-eth0", GenerateName:"whisker-78fdc4b84-", Namespace:"calico-system", SelfLink:"", UID:"955b91f3-f1ea-4c71-87e0-f03e60d99868", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 4, 56, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"78fdc4b84", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452.0.0-n-087888047c", ContainerID:"", Pod:"whisker-78fdc4b84-h24zs", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.111.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"caliad5f396892a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 04:56:41.275156 containerd[1866]: 2025-09-09 04:56:41.233 [INFO][4449] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.111.1/32] ContainerID="6b76204070a7ba2428e5997faaf86a95d3c1eb4a1b8c657252654af90cac7073" Namespace="calico-system" Pod="whisker-78fdc4b84-h24zs" WorkloadEndpoint="ci--4452.0.0--n--087888047c-k8s-whisker--78fdc4b84--h24zs-eth0" Sep 9 04:56:41.275204 containerd[1866]: 2025-09-09 04:56:41.233 [INFO][4449] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliad5f396892a ContainerID="6b76204070a7ba2428e5997faaf86a95d3c1eb4a1b8c657252654af90cac7073" Namespace="calico-system" Pod="whisker-78fdc4b84-h24zs" WorkloadEndpoint="ci--4452.0.0--n--087888047c-k8s-whisker--78fdc4b84--h24zs-eth0" Sep 9 04:56:41.275204 containerd[1866]: 2025-09-09 04:56:41.256 [INFO][4449] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6b76204070a7ba2428e5997faaf86a95d3c1eb4a1b8c657252654af90cac7073" Namespace="calico-system" Pod="whisker-78fdc4b84-h24zs" WorkloadEndpoint="ci--4452.0.0--n--087888047c-k8s-whisker--78fdc4b84--h24zs-eth0" Sep 9 04:56:41.275231 containerd[1866]: 2025-09-09 04:56:41.256 [INFO][4449] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="6b76204070a7ba2428e5997faaf86a95d3c1eb4a1b8c657252654af90cac7073" Namespace="calico-system" Pod="whisker-78fdc4b84-h24zs" WorkloadEndpoint="ci--4452.0.0--n--087888047c-k8s-whisker--78fdc4b84--h24zs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452.0.0--n--087888047c-k8s-whisker--78fdc4b84--h24zs-eth0", GenerateName:"whisker-78fdc4b84-", Namespace:"calico-system", SelfLink:"", UID:"955b91f3-f1ea-4c71-87e0-f03e60d99868", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 4, 56, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"78fdc4b84", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452.0.0-n-087888047c", ContainerID:"6b76204070a7ba2428e5997faaf86a95d3c1eb4a1b8c657252654af90cac7073", Pod:"whisker-78fdc4b84-h24zs", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.111.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"caliad5f396892a", MAC:"6e:06:34:98:c6:5a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 04:56:41.275262 containerd[1866]: 2025-09-09 04:56:41.273 [INFO][4449] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6b76204070a7ba2428e5997faaf86a95d3c1eb4a1b8c657252654af90cac7073" Namespace="calico-system" Pod="whisker-78fdc4b84-h24zs" 
WorkloadEndpoint="ci--4452.0.0--n--087888047c-k8s-whisker--78fdc4b84--h24zs-eth0" Sep 9 04:56:41.321427 containerd[1866]: time="2025-09-09T04:56:41.321377356Z" level=info msg="connecting to shim 6b76204070a7ba2428e5997faaf86a95d3c1eb4a1b8c657252654af90cac7073" address="unix:///run/containerd/s/0cce61ad06ecbb9e9e6f272bd4592e91503d991b7187e5592eebbe4715ec4c2b" namespace=k8s.io protocol=ttrpc version=3 Sep 9 04:56:41.340162 systemd[1]: Started cri-containerd-6b76204070a7ba2428e5997faaf86a95d3c1eb4a1b8c657252654af90cac7073.scope - libcontainer container 6b76204070a7ba2428e5997faaf86a95d3c1eb4a1b8c657252654af90cac7073. Sep 9 04:56:41.368875 containerd[1866]: time="2025-09-09T04:56:41.368824636Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-78fdc4b84-h24zs,Uid:955b91f3-f1ea-4c71-87e0-f03e60d99868,Namespace:calico-system,Attempt:0,} returns sandbox id \"6b76204070a7ba2428e5997faaf86a95d3c1eb4a1b8c657252654af90cac7073\"" Sep 9 04:56:41.371556 containerd[1866]: time="2025-09-09T04:56:41.371530936Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 9 04:56:41.899507 containerd[1866]: time="2025-09-09T04:56:41.899464333Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2fef469f821f5a8a3c89f16be6a251680bf2e8dee3b39f9b54d9592cad224321\" id:\"d8ddbe99aef34933021ba463063b8913ac1ba2d320c1bf04dbc70b33d79f723a\" pid:4622 exit_status:1 exited_at:{seconds:1757393801 nanos:899169412}" Sep 9 04:56:42.641466 kubelet[3383]: I0909 04:56:42.641316 3383 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c93fd98f-5990-467a-982e-edb4ef4740be" path="/var/lib/kubelet/pods/c93fd98f-5990-467a-982e-edb4ef4740be/volumes" Sep 9 04:56:42.885212 systemd-networkd[1687]: caliad5f396892a: Gained IPv6LL Sep 9 04:56:43.356625 containerd[1866]: time="2025-09-09T04:56:43.356571874Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 
9 04:56:43.360416 containerd[1866]: time="2025-09-09T04:56:43.360232303Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4605606" Sep 9 04:56:43.364320 containerd[1866]: time="2025-09-09T04:56:43.364286937Z" level=info msg="ImageCreate event name:\"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:56:43.368712 containerd[1866]: time="2025-09-09T04:56:43.368683526Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:56:43.369297 containerd[1866]: time="2025-09-09T04:56:43.368974104Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"5974839\" in 1.99741623s" Sep 9 04:56:43.369297 containerd[1866]: time="2025-09-09T04:56:43.369020169Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\"" Sep 9 04:56:43.371630 containerd[1866]: time="2025-09-09T04:56:43.371247241Z" level=info msg="CreateContainer within sandbox \"6b76204070a7ba2428e5997faaf86a95d3c1eb4a1b8c657252654af90cac7073\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 9 04:56:43.392233 containerd[1866]: time="2025-09-09T04:56:43.392202489Z" level=info msg="Container b6007c86b40b0b63345e1670f5aac0abf7633b4392fd013995447a58fb260b56: CDI devices from CRI Config.CDIDevices: []" Sep 9 04:56:43.393828 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount989819427.mount: Deactivated 
successfully. Sep 9 04:56:43.412304 containerd[1866]: time="2025-09-09T04:56:43.412263044Z" level=info msg="CreateContainer within sandbox \"6b76204070a7ba2428e5997faaf86a95d3c1eb4a1b8c657252654af90cac7073\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"b6007c86b40b0b63345e1670f5aac0abf7633b4392fd013995447a58fb260b56\"" Sep 9 04:56:43.412805 containerd[1866]: time="2025-09-09T04:56:43.412765796Z" level=info msg="StartContainer for \"b6007c86b40b0b63345e1670f5aac0abf7633b4392fd013995447a58fb260b56\"" Sep 9 04:56:43.413784 containerd[1866]: time="2025-09-09T04:56:43.413756300Z" level=info msg="connecting to shim b6007c86b40b0b63345e1670f5aac0abf7633b4392fd013995447a58fb260b56" address="unix:///run/containerd/s/0cce61ad06ecbb9e9e6f272bd4592e91503d991b7187e5592eebbe4715ec4c2b" protocol=ttrpc version=3 Sep 9 04:56:43.432153 systemd[1]: Started cri-containerd-b6007c86b40b0b63345e1670f5aac0abf7633b4392fd013995447a58fb260b56.scope - libcontainer container b6007c86b40b0b63345e1670f5aac0abf7633b4392fd013995447a58fb260b56. Sep 9 04:56:43.464470 containerd[1866]: time="2025-09-09T04:56:43.464379604Z" level=info msg="StartContainer for \"b6007c86b40b0b63345e1670f5aac0abf7633b4392fd013995447a58fb260b56\" returns successfully" Sep 9 04:56:43.467465 containerd[1866]: time="2025-09-09T04:56:43.467126708Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 9 04:56:44.136507 kubelet[3383]: I0909 04:56:44.136141 3383 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 04:56:45.026338 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount61227434.mount: Deactivated successfully. 
Sep 9 04:56:45.132149 containerd[1866]: time="2025-09-09T04:56:45.132086065Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:56:45.138248 containerd[1866]: time="2025-09-09T04:56:45.138089329Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=30823700" Sep 9 04:56:45.142246 containerd[1866]: time="2025-09-09T04:56:45.142140166Z" level=info msg="ImageCreate event name:\"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:56:45.146189 containerd[1866]: time="2025-09-09T04:56:45.146138737Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:56:45.147151 containerd[1866]: time="2025-09-09T04:56:45.146872695Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"30823530\" in 1.679715626s" Sep 9 04:56:45.147151 containerd[1866]: time="2025-09-09T04:56:45.146903672Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\"" Sep 9 04:56:45.149857 containerd[1866]: time="2025-09-09T04:56:45.149829930Z" level=info msg="CreateContainer within sandbox \"6b76204070a7ba2428e5997faaf86a95d3c1eb4a1b8c657252654af90cac7073\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 9 04:56:45.178921 
containerd[1866]: time="2025-09-09T04:56:45.178052422Z" level=info msg="Container 24723d0552d674d0682e558af0104a7a5ff6b4268e70ba20275b138a4999f8e8: CDI devices from CRI Config.CDIDevices: []" Sep 9 04:56:45.204906 containerd[1866]: time="2025-09-09T04:56:45.204111184Z" level=info msg="CreateContainer within sandbox \"6b76204070a7ba2428e5997faaf86a95d3c1eb4a1b8c657252654af90cac7073\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"24723d0552d674d0682e558af0104a7a5ff6b4268e70ba20275b138a4999f8e8\"" Sep 9 04:56:45.208203 containerd[1866]: time="2025-09-09T04:56:45.206902046Z" level=info msg="StartContainer for \"24723d0552d674d0682e558af0104a7a5ff6b4268e70ba20275b138a4999f8e8\"" Sep 9 04:56:45.208203 containerd[1866]: time="2025-09-09T04:56:45.207748368Z" level=info msg="connecting to shim 24723d0552d674d0682e558af0104a7a5ff6b4268e70ba20275b138a4999f8e8" address="unix:///run/containerd/s/0cce61ad06ecbb9e9e6f272bd4592e91503d991b7187e5592eebbe4715ec4c2b" protocol=ttrpc version=3 Sep 9 04:56:45.230643 systemd[1]: Started cri-containerd-24723d0552d674d0682e558af0104a7a5ff6b4268e70ba20275b138a4999f8e8.scope - libcontainer container 24723d0552d674d0682e558af0104a7a5ff6b4268e70ba20275b138a4999f8e8. 
Sep 9 04:56:45.281902 containerd[1866]: time="2025-09-09T04:56:45.281795989Z" level=info msg="StartContainer for \"24723d0552d674d0682e558af0104a7a5ff6b4268e70ba20275b138a4999f8e8\" returns successfully" Sep 9 04:56:45.509280 systemd-networkd[1687]: vxlan.calico: Link UP Sep 9 04:56:45.509288 systemd-networkd[1687]: vxlan.calico: Gained carrier Sep 9 04:56:45.778862 kubelet[3383]: I0909 04:56:45.778615 3383 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-78fdc4b84-h24zs" podStartSLOduration=2.000719684 podStartE2EDuration="5.778596661s" podCreationTimestamp="2025-09-09 04:56:40 +0000 UTC" firstStartedPulling="2025-09-09 04:56:41.370045143 +0000 UTC m=+34.876996084" lastFinishedPulling="2025-09-09 04:56:45.147922112 +0000 UTC m=+38.654873061" observedRunningTime="2025-09-09 04:56:45.777826078 +0000 UTC m=+39.284777027" watchObservedRunningTime="2025-09-09 04:56:45.778596661 +0000 UTC m=+39.285547610" Sep 9 04:56:46.640256 containerd[1866]: time="2025-09-09T04:56:46.640182609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5nxp2,Uid:4e8b5849-a641-4cf1-9131-40b61016147b,Namespace:kube-system,Attempt:0,}" Sep 9 04:56:46.641225 containerd[1866]: time="2025-09-09T04:56:46.640776163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-646f77f6b4-ml9lp,Uid:d841bb99-115e-474c-93ff-3dec18b6a9a8,Namespace:calico-apiserver,Attempt:0,}" Sep 9 04:56:46.641225 containerd[1866]: time="2025-09-09T04:56:46.640184033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-646f77f6b4-tb4sw,Uid:1ee9be11-f0c2-4be0-9b26-0b300acc210d,Namespace:calico-apiserver,Attempt:0,}" Sep 9 04:56:46.808196 systemd-networkd[1687]: calia92f0f44c70: Link UP Sep 9 04:56:46.808703 systemd-networkd[1687]: calia92f0f44c70: Gained carrier Sep 9 04:56:46.828357 containerd[1866]: 2025-09-09 04:56:46.695 [INFO][4896] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: 
&{{WorkloadEndpoint projectcalico.org/v3} {ci--4452.0.0--n--087888047c-k8s-coredns--668d6bf9bc--5nxp2-eth0 coredns-668d6bf9bc- kube-system 4e8b5849-a641-4cf1-9131-40b61016147b 778 0 2025-09-09 04:56:12 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4452.0.0-n-087888047c coredns-668d6bf9bc-5nxp2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia92f0f44c70 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="8c117adb4ff67dff649dd5f3c5209ebc64fd889017cb56cfd233eeee2ca8b986" Namespace="kube-system" Pod="coredns-668d6bf9bc-5nxp2" WorkloadEndpoint="ci--4452.0.0--n--087888047c-k8s-coredns--668d6bf9bc--5nxp2-" Sep 9 04:56:46.828357 containerd[1866]: 2025-09-09 04:56:46.695 [INFO][4896] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8c117adb4ff67dff649dd5f3c5209ebc64fd889017cb56cfd233eeee2ca8b986" Namespace="kube-system" Pod="coredns-668d6bf9bc-5nxp2" WorkloadEndpoint="ci--4452.0.0--n--087888047c-k8s-coredns--668d6bf9bc--5nxp2-eth0" Sep 9 04:56:46.828357 containerd[1866]: 2025-09-09 04:56:46.742 [INFO][4928] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8c117adb4ff67dff649dd5f3c5209ebc64fd889017cb56cfd233eeee2ca8b986" HandleID="k8s-pod-network.8c117adb4ff67dff649dd5f3c5209ebc64fd889017cb56cfd233eeee2ca8b986" Workload="ci--4452.0.0--n--087888047c-k8s-coredns--668d6bf9bc--5nxp2-eth0" Sep 9 04:56:46.828554 containerd[1866]: 2025-09-09 04:56:46.743 [INFO][4928] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8c117adb4ff67dff649dd5f3c5209ebc64fd889017cb56cfd233eeee2ca8b986" HandleID="k8s-pod-network.8c117adb4ff67dff649dd5f3c5209ebc64fd889017cb56cfd233eeee2ca8b986" Workload="ci--4452.0.0--n--087888047c-k8s-coredns--668d6bf9bc--5nxp2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cb610), 
Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4452.0.0-n-087888047c", "pod":"coredns-668d6bf9bc-5nxp2", "timestamp":"2025-09-09 04:56:46.742853407 +0000 UTC"}, Hostname:"ci-4452.0.0-n-087888047c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 04:56:46.828554 containerd[1866]: 2025-09-09 04:56:46.743 [INFO][4928] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 04:56:46.828554 containerd[1866]: 2025-09-09 04:56:46.743 [INFO][4928] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 04:56:46.828554 containerd[1866]: 2025-09-09 04:56:46.743 [INFO][4928] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4452.0.0-n-087888047c' Sep 9 04:56:46.828554 containerd[1866]: 2025-09-09 04:56:46.755 [INFO][4928] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8c117adb4ff67dff649dd5f3c5209ebc64fd889017cb56cfd233eeee2ca8b986" host="ci-4452.0.0-n-087888047c" Sep 9 04:56:46.828554 containerd[1866]: 2025-09-09 04:56:46.764 [INFO][4928] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4452.0.0-n-087888047c" Sep 9 04:56:46.828554 containerd[1866]: 2025-09-09 04:56:46.774 [INFO][4928] ipam/ipam.go 511: Trying affinity for 192.168.111.0/26 host="ci-4452.0.0-n-087888047c" Sep 9 04:56:46.828554 containerd[1866]: 2025-09-09 04:56:46.777 [INFO][4928] ipam/ipam.go 158: Attempting to load block cidr=192.168.111.0/26 host="ci-4452.0.0-n-087888047c" Sep 9 04:56:46.828554 containerd[1866]: 2025-09-09 04:56:46.780 [INFO][4928] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.111.0/26 host="ci-4452.0.0-n-087888047c" Sep 9 04:56:46.828684 containerd[1866]: 2025-09-09 04:56:46.780 [INFO][4928] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.111.0/26 
handle="k8s-pod-network.8c117adb4ff67dff649dd5f3c5209ebc64fd889017cb56cfd233eeee2ca8b986" host="ci-4452.0.0-n-087888047c" Sep 9 04:56:46.828684 containerd[1866]: 2025-09-09 04:56:46.782 [INFO][4928] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8c117adb4ff67dff649dd5f3c5209ebc64fd889017cb56cfd233eeee2ca8b986 Sep 9 04:56:46.828684 containerd[1866]: 2025-09-09 04:56:46.797 [INFO][4928] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.111.0/26 handle="k8s-pod-network.8c117adb4ff67dff649dd5f3c5209ebc64fd889017cb56cfd233eeee2ca8b986" host="ci-4452.0.0-n-087888047c" Sep 9 04:56:46.828684 containerd[1866]: 2025-09-09 04:56:46.802 [INFO][4928] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.111.2/26] block=192.168.111.0/26 handle="k8s-pod-network.8c117adb4ff67dff649dd5f3c5209ebc64fd889017cb56cfd233eeee2ca8b986" host="ci-4452.0.0-n-087888047c" Sep 9 04:56:46.828684 containerd[1866]: 2025-09-09 04:56:46.802 [INFO][4928] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.111.2/26] handle="k8s-pod-network.8c117adb4ff67dff649dd5f3c5209ebc64fd889017cb56cfd233eeee2ca8b986" host="ci-4452.0.0-n-087888047c" Sep 9 04:56:46.828684 containerd[1866]: 2025-09-09 04:56:46.802 [INFO][4928] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 04:56:46.828684 containerd[1866]: 2025-09-09 04:56:46.802 [INFO][4928] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.111.2/26] IPv6=[] ContainerID="8c117adb4ff67dff649dd5f3c5209ebc64fd889017cb56cfd233eeee2ca8b986" HandleID="k8s-pod-network.8c117adb4ff67dff649dd5f3c5209ebc64fd889017cb56cfd233eeee2ca8b986" Workload="ci--4452.0.0--n--087888047c-k8s-coredns--668d6bf9bc--5nxp2-eth0" Sep 9 04:56:46.828821 containerd[1866]: 2025-09-09 04:56:46.806 [INFO][4896] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8c117adb4ff67dff649dd5f3c5209ebc64fd889017cb56cfd233eeee2ca8b986" Namespace="kube-system" Pod="coredns-668d6bf9bc-5nxp2" WorkloadEndpoint="ci--4452.0.0--n--087888047c-k8s-coredns--668d6bf9bc--5nxp2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452.0.0--n--087888047c-k8s-coredns--668d6bf9bc--5nxp2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"4e8b5849-a641-4cf1-9131-40b61016147b", ResourceVersion:"778", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 4, 56, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452.0.0-n-087888047c", ContainerID:"", Pod:"coredns-668d6bf9bc-5nxp2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.111.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"calia92f0f44c70", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 04:56:46.828821 containerd[1866]: 2025-09-09 04:56:46.806 [INFO][4896] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.111.2/32] ContainerID="8c117adb4ff67dff649dd5f3c5209ebc64fd889017cb56cfd233eeee2ca8b986" Namespace="kube-system" Pod="coredns-668d6bf9bc-5nxp2" WorkloadEndpoint="ci--4452.0.0--n--087888047c-k8s-coredns--668d6bf9bc--5nxp2-eth0" Sep 9 04:56:46.828821 containerd[1866]: 2025-09-09 04:56:46.806 [INFO][4896] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia92f0f44c70 ContainerID="8c117adb4ff67dff649dd5f3c5209ebc64fd889017cb56cfd233eeee2ca8b986" Namespace="kube-system" Pod="coredns-668d6bf9bc-5nxp2" WorkloadEndpoint="ci--4452.0.0--n--087888047c-k8s-coredns--668d6bf9bc--5nxp2-eth0" Sep 9 04:56:46.828821 containerd[1866]: 2025-09-09 04:56:46.809 [INFO][4896] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8c117adb4ff67dff649dd5f3c5209ebc64fd889017cb56cfd233eeee2ca8b986" Namespace="kube-system" Pod="coredns-668d6bf9bc-5nxp2" WorkloadEndpoint="ci--4452.0.0--n--087888047c-k8s-coredns--668d6bf9bc--5nxp2-eth0" Sep 9 04:56:46.828821 containerd[1866]: 2025-09-09 04:56:46.811 [INFO][4896] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8c117adb4ff67dff649dd5f3c5209ebc64fd889017cb56cfd233eeee2ca8b986" Namespace="kube-system" Pod="coredns-668d6bf9bc-5nxp2" 
WorkloadEndpoint="ci--4452.0.0--n--087888047c-k8s-coredns--668d6bf9bc--5nxp2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452.0.0--n--087888047c-k8s-coredns--668d6bf9bc--5nxp2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"4e8b5849-a641-4cf1-9131-40b61016147b", ResourceVersion:"778", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 4, 56, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452.0.0-n-087888047c", ContainerID:"8c117adb4ff67dff649dd5f3c5209ebc64fd889017cb56cfd233eeee2ca8b986", Pod:"coredns-668d6bf9bc-5nxp2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.111.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia92f0f44c70", MAC:"2a:54:cb:38:d8:c8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 04:56:46.828821 
containerd[1866]: 2025-09-09 04:56:46.823 [INFO][4896] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8c117adb4ff67dff649dd5f3c5209ebc64fd889017cb56cfd233eeee2ca8b986" Namespace="kube-system" Pod="coredns-668d6bf9bc-5nxp2" WorkloadEndpoint="ci--4452.0.0--n--087888047c-k8s-coredns--668d6bf9bc--5nxp2-eth0" Sep 9 04:56:46.890770 containerd[1866]: time="2025-09-09T04:56:46.890572726Z" level=info msg="connecting to shim 8c117adb4ff67dff649dd5f3c5209ebc64fd889017cb56cfd233eeee2ca8b986" address="unix:///run/containerd/s/1069011161cacf3d7ddce433798fdf6f5afd7b68c468d0b4e2dcaa39412e53a2" namespace=k8s.io protocol=ttrpc version=3 Sep 9 04:56:46.912436 systemd-networkd[1687]: cali4f463448632: Link UP Sep 9 04:56:46.914816 systemd-networkd[1687]: cali4f463448632: Gained carrier Sep 9 04:56:46.930150 systemd[1]: Started cri-containerd-8c117adb4ff67dff649dd5f3c5209ebc64fd889017cb56cfd233eeee2ca8b986.scope - libcontainer container 8c117adb4ff67dff649dd5f3c5209ebc64fd889017cb56cfd233eeee2ca8b986. 
Sep 9 04:56:46.934697 containerd[1866]: 2025-09-09 04:56:46.723 [INFO][4915] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4452.0.0--n--087888047c-k8s-calico--apiserver--646f77f6b4--tb4sw-eth0 calico-apiserver-646f77f6b4- calico-apiserver 1ee9be11-f0c2-4be0-9b26-0b300acc210d 789 0 2025-09-09 04:56:22 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:646f77f6b4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4452.0.0-n-087888047c calico-apiserver-646f77f6b4-tb4sw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4f463448632 [] [] }} ContainerID="e56d84dd57dd4863b6c662401e80d1bf26316af41921f0aa9e1377ce90e7719a" Namespace="calico-apiserver" Pod="calico-apiserver-646f77f6b4-tb4sw" WorkloadEndpoint="ci--4452.0.0--n--087888047c-k8s-calico--apiserver--646f77f6b4--tb4sw-" Sep 9 04:56:46.934697 containerd[1866]: 2025-09-09 04:56:46.723 [INFO][4915] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e56d84dd57dd4863b6c662401e80d1bf26316af41921f0aa9e1377ce90e7719a" Namespace="calico-apiserver" Pod="calico-apiserver-646f77f6b4-tb4sw" WorkloadEndpoint="ci--4452.0.0--n--087888047c-k8s-calico--apiserver--646f77f6b4--tb4sw-eth0" Sep 9 04:56:46.934697 containerd[1866]: 2025-09-09 04:56:46.772 [INFO][4936] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e56d84dd57dd4863b6c662401e80d1bf26316af41921f0aa9e1377ce90e7719a" HandleID="k8s-pod-network.e56d84dd57dd4863b6c662401e80d1bf26316af41921f0aa9e1377ce90e7719a" Workload="ci--4452.0.0--n--087888047c-k8s-calico--apiserver--646f77f6b4--tb4sw-eth0" Sep 9 04:56:46.934697 containerd[1866]: 2025-09-09 04:56:46.773 [INFO][4936] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="e56d84dd57dd4863b6c662401e80d1bf26316af41921f0aa9e1377ce90e7719a" HandleID="k8s-pod-network.e56d84dd57dd4863b6c662401e80d1bf26316af41921f0aa9e1377ce90e7719a" Workload="ci--4452.0.0--n--087888047c-k8s-calico--apiserver--646f77f6b4--tb4sw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d640), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4452.0.0-n-087888047c", "pod":"calico-apiserver-646f77f6b4-tb4sw", "timestamp":"2025-09-09 04:56:46.772465278 +0000 UTC"}, Hostname:"ci-4452.0.0-n-087888047c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 04:56:46.934697 containerd[1866]: 2025-09-09 04:56:46.773 [INFO][4936] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 04:56:46.934697 containerd[1866]: 2025-09-09 04:56:46.802 [INFO][4936] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 04:56:46.934697 containerd[1866]: 2025-09-09 04:56:46.803 [INFO][4936] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4452.0.0-n-087888047c' Sep 9 04:56:46.934697 containerd[1866]: 2025-09-09 04:56:46.856 [INFO][4936] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e56d84dd57dd4863b6c662401e80d1bf26316af41921f0aa9e1377ce90e7719a" host="ci-4452.0.0-n-087888047c" Sep 9 04:56:46.934697 containerd[1866]: 2025-09-09 04:56:46.863 [INFO][4936] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4452.0.0-n-087888047c" Sep 9 04:56:46.934697 containerd[1866]: 2025-09-09 04:56:46.869 [INFO][4936] ipam/ipam.go 511: Trying affinity for 192.168.111.0/26 host="ci-4452.0.0-n-087888047c" Sep 9 04:56:46.934697 containerd[1866]: 2025-09-09 04:56:46.871 [INFO][4936] ipam/ipam.go 158: Attempting to load block cidr=192.168.111.0/26 host="ci-4452.0.0-n-087888047c" Sep 9 04:56:46.934697 containerd[1866]: 2025-09-09 04:56:46.873 [INFO][4936] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.111.0/26 host="ci-4452.0.0-n-087888047c" Sep 9 04:56:46.934697 containerd[1866]: 2025-09-09 04:56:46.874 [INFO][4936] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.111.0/26 handle="k8s-pod-network.e56d84dd57dd4863b6c662401e80d1bf26316af41921f0aa9e1377ce90e7719a" host="ci-4452.0.0-n-087888047c" Sep 9 04:56:46.934697 containerd[1866]: 2025-09-09 04:56:46.884 [INFO][4936] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e56d84dd57dd4863b6c662401e80d1bf26316af41921f0aa9e1377ce90e7719a Sep 9 04:56:46.934697 containerd[1866]: 2025-09-09 04:56:46.889 [INFO][4936] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.111.0/26 handle="k8s-pod-network.e56d84dd57dd4863b6c662401e80d1bf26316af41921f0aa9e1377ce90e7719a" host="ci-4452.0.0-n-087888047c" Sep 9 04:56:46.934697 containerd[1866]: 2025-09-09 04:56:46.900 [INFO][4936] ipam/ipam.go 1256: Successfully claimed 
IPs: [192.168.111.3/26] block=192.168.111.0/26 handle="k8s-pod-network.e56d84dd57dd4863b6c662401e80d1bf26316af41921f0aa9e1377ce90e7719a" host="ci-4452.0.0-n-087888047c" Sep 9 04:56:46.934697 containerd[1866]: 2025-09-09 04:56:46.900 [INFO][4936] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.111.3/26] handle="k8s-pod-network.e56d84dd57dd4863b6c662401e80d1bf26316af41921f0aa9e1377ce90e7719a" host="ci-4452.0.0-n-087888047c" Sep 9 04:56:46.934697 containerd[1866]: 2025-09-09 04:56:46.900 [INFO][4936] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 04:56:46.934697 containerd[1866]: 2025-09-09 04:56:46.900 [INFO][4936] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.111.3/26] IPv6=[] ContainerID="e56d84dd57dd4863b6c662401e80d1bf26316af41921f0aa9e1377ce90e7719a" HandleID="k8s-pod-network.e56d84dd57dd4863b6c662401e80d1bf26316af41921f0aa9e1377ce90e7719a" Workload="ci--4452.0.0--n--087888047c-k8s-calico--apiserver--646f77f6b4--tb4sw-eth0" Sep 9 04:56:46.935898 containerd[1866]: 2025-09-09 04:56:46.906 [INFO][4915] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e56d84dd57dd4863b6c662401e80d1bf26316af41921f0aa9e1377ce90e7719a" Namespace="calico-apiserver" Pod="calico-apiserver-646f77f6b4-tb4sw" WorkloadEndpoint="ci--4452.0.0--n--087888047c-k8s-calico--apiserver--646f77f6b4--tb4sw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452.0.0--n--087888047c-k8s-calico--apiserver--646f77f6b4--tb4sw-eth0", GenerateName:"calico-apiserver-646f77f6b4-", Namespace:"calico-apiserver", SelfLink:"", UID:"1ee9be11-f0c2-4be0-9b26-0b300acc210d", ResourceVersion:"789", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 4, 56, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"646f77f6b4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452.0.0-n-087888047c", ContainerID:"", Pod:"calico-apiserver-646f77f6b4-tb4sw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.111.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4f463448632", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 04:56:46.935898 containerd[1866]: 2025-09-09 04:56:46.908 [INFO][4915] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.111.3/32] ContainerID="e56d84dd57dd4863b6c662401e80d1bf26316af41921f0aa9e1377ce90e7719a" Namespace="calico-apiserver" Pod="calico-apiserver-646f77f6b4-tb4sw" WorkloadEndpoint="ci--4452.0.0--n--087888047c-k8s-calico--apiserver--646f77f6b4--tb4sw-eth0" Sep 9 04:56:46.935898 containerd[1866]: 2025-09-09 04:56:46.908 [INFO][4915] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4f463448632 ContainerID="e56d84dd57dd4863b6c662401e80d1bf26316af41921f0aa9e1377ce90e7719a" Namespace="calico-apiserver" Pod="calico-apiserver-646f77f6b4-tb4sw" WorkloadEndpoint="ci--4452.0.0--n--087888047c-k8s-calico--apiserver--646f77f6b4--tb4sw-eth0" Sep 9 04:56:46.935898 containerd[1866]: 2025-09-09 04:56:46.915 [INFO][4915] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e56d84dd57dd4863b6c662401e80d1bf26316af41921f0aa9e1377ce90e7719a" Namespace="calico-apiserver" Pod="calico-apiserver-646f77f6b4-tb4sw" 
WorkloadEndpoint="ci--4452.0.0--n--087888047c-k8s-calico--apiserver--646f77f6b4--tb4sw-eth0" Sep 9 04:56:46.935898 containerd[1866]: 2025-09-09 04:56:46.917 [INFO][4915] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e56d84dd57dd4863b6c662401e80d1bf26316af41921f0aa9e1377ce90e7719a" Namespace="calico-apiserver" Pod="calico-apiserver-646f77f6b4-tb4sw" WorkloadEndpoint="ci--4452.0.0--n--087888047c-k8s-calico--apiserver--646f77f6b4--tb4sw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452.0.0--n--087888047c-k8s-calico--apiserver--646f77f6b4--tb4sw-eth0", GenerateName:"calico-apiserver-646f77f6b4-", Namespace:"calico-apiserver", SelfLink:"", UID:"1ee9be11-f0c2-4be0-9b26-0b300acc210d", ResourceVersion:"789", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 4, 56, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"646f77f6b4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452.0.0-n-087888047c", ContainerID:"e56d84dd57dd4863b6c662401e80d1bf26316af41921f0aa9e1377ce90e7719a", Pod:"calico-apiserver-646f77f6b4-tb4sw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.111.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4f463448632", MAC:"6a:72:87:d6:fa:1d", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 04:56:46.935898 containerd[1866]: 2025-09-09 04:56:46.929 [INFO][4915] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e56d84dd57dd4863b6c662401e80d1bf26316af41921f0aa9e1377ce90e7719a" Namespace="calico-apiserver" Pod="calico-apiserver-646f77f6b4-tb4sw" WorkloadEndpoint="ci--4452.0.0--n--087888047c-k8s-calico--apiserver--646f77f6b4--tb4sw-eth0" Sep 9 04:56:46.982355 containerd[1866]: time="2025-09-09T04:56:46.982252002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5nxp2,Uid:4e8b5849-a641-4cf1-9131-40b61016147b,Namespace:kube-system,Attempt:0,} returns sandbox id \"8c117adb4ff67dff649dd5f3c5209ebc64fd889017cb56cfd233eeee2ca8b986\"" Sep 9 04:56:46.984657 containerd[1866]: time="2025-09-09T04:56:46.984452110Z" level=info msg="CreateContainer within sandbox \"8c117adb4ff67dff649dd5f3c5209ebc64fd889017cb56cfd233eeee2ca8b986\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 04:56:46.998748 containerd[1866]: time="2025-09-09T04:56:46.998718133Z" level=info msg="connecting to shim e56d84dd57dd4863b6c662401e80d1bf26316af41921f0aa9e1377ce90e7719a" address="unix:///run/containerd/s/07c588f84cc92c2d6f6b06310cffc66d46648c7e286ea387fa8077f02be01378" namespace=k8s.io protocol=ttrpc version=3 Sep 9 04:56:47.012074 systemd-networkd[1687]: cali3044e42f9a8: Link UP Sep 9 04:56:47.012797 systemd-networkd[1687]: cali3044e42f9a8: Gained carrier Sep 9 04:56:47.019646 containerd[1866]: time="2025-09-09T04:56:47.019613407Z" level=info msg="Container 7b27d3d06eb5dc9bd5f1888deac26de6085b527329a7431d3575fb190a94c405: CDI devices from CRI Config.CDIDevices: []" Sep 9 04:56:47.031898 systemd[1]: Started cri-containerd-e56d84dd57dd4863b6c662401e80d1bf26316af41921f0aa9e1377ce90e7719a.scope - libcontainer container e56d84dd57dd4863b6c662401e80d1bf26316af41921f0aa9e1377ce90e7719a. 
Sep 9 04:56:47.039116 containerd[1866]: 2025-09-09 04:56:46.726 [INFO][4905] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4452.0.0--n--087888047c-k8s-calico--apiserver--646f77f6b4--ml9lp-eth0 calico-apiserver-646f77f6b4- calico-apiserver d841bb99-115e-474c-93ff-3dec18b6a9a8 785 0 2025-09-09 04:56:22 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:646f77f6b4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4452.0.0-n-087888047c calico-apiserver-646f77f6b4-ml9lp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3044e42f9a8 [] [] }} ContainerID="bf4edd1830cee4582382cdbd65f955c683e062faa88ef6bc5ff69bf537f5c4f2" Namespace="calico-apiserver" Pod="calico-apiserver-646f77f6b4-ml9lp" WorkloadEndpoint="ci--4452.0.0--n--087888047c-k8s-calico--apiserver--646f77f6b4--ml9lp-" Sep 9 04:56:47.039116 containerd[1866]: 2025-09-09 04:56:46.728 [INFO][4905] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bf4edd1830cee4582382cdbd65f955c683e062faa88ef6bc5ff69bf537f5c4f2" Namespace="calico-apiserver" Pod="calico-apiserver-646f77f6b4-ml9lp" WorkloadEndpoint="ci--4452.0.0--n--087888047c-k8s-calico--apiserver--646f77f6b4--ml9lp-eth0" Sep 9 04:56:47.039116 containerd[1866]: 2025-09-09 04:56:46.791 [INFO][4941] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bf4edd1830cee4582382cdbd65f955c683e062faa88ef6bc5ff69bf537f5c4f2" HandleID="k8s-pod-network.bf4edd1830cee4582382cdbd65f955c683e062faa88ef6bc5ff69bf537f5c4f2" Workload="ci--4452.0.0--n--087888047c-k8s-calico--apiserver--646f77f6b4--ml9lp-eth0" Sep 9 04:56:47.039116 containerd[1866]: 2025-09-09 04:56:46.791 [INFO][4941] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="bf4edd1830cee4582382cdbd65f955c683e062faa88ef6bc5ff69bf537f5c4f2" HandleID="k8s-pod-network.bf4edd1830cee4582382cdbd65f955c683e062faa88ef6bc5ff69bf537f5c4f2" Workload="ci--4452.0.0--n--087888047c-k8s-calico--apiserver--646f77f6b4--ml9lp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d36c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4452.0.0-n-087888047c", "pod":"calico-apiserver-646f77f6b4-ml9lp", "timestamp":"2025-09-09 04:56:46.791584762 +0000 UTC"}, Hostname:"ci-4452.0.0-n-087888047c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 04:56:47.039116 containerd[1866]: 2025-09-09 04:56:46.791 [INFO][4941] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 04:56:47.039116 containerd[1866]: 2025-09-09 04:56:46.900 [INFO][4941] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 04:56:47.039116 containerd[1866]: 2025-09-09 04:56:46.900 [INFO][4941] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4452.0.0-n-087888047c' Sep 9 04:56:47.039116 containerd[1866]: 2025-09-09 04:56:46.958 [INFO][4941] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bf4edd1830cee4582382cdbd65f955c683e062faa88ef6bc5ff69bf537f5c4f2" host="ci-4452.0.0-n-087888047c" Sep 9 04:56:47.039116 containerd[1866]: 2025-09-09 04:56:46.965 [INFO][4941] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4452.0.0-n-087888047c" Sep 9 04:56:47.039116 containerd[1866]: 2025-09-09 04:56:46.970 [INFO][4941] ipam/ipam.go 511: Trying affinity for 192.168.111.0/26 host="ci-4452.0.0-n-087888047c" Sep 9 04:56:47.039116 containerd[1866]: 2025-09-09 04:56:46.972 [INFO][4941] ipam/ipam.go 158: Attempting to load block cidr=192.168.111.0/26 host="ci-4452.0.0-n-087888047c" Sep 9 04:56:47.039116 containerd[1866]: 2025-09-09 04:56:46.978 [INFO][4941] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.111.0/26 host="ci-4452.0.0-n-087888047c" Sep 9 04:56:47.039116 containerd[1866]: 2025-09-09 04:56:46.978 [INFO][4941] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.111.0/26 handle="k8s-pod-network.bf4edd1830cee4582382cdbd65f955c683e062faa88ef6bc5ff69bf537f5c4f2" host="ci-4452.0.0-n-087888047c" Sep 9 04:56:47.039116 containerd[1866]: 2025-09-09 04:56:46.979 [INFO][4941] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.bf4edd1830cee4582382cdbd65f955c683e062faa88ef6bc5ff69bf537f5c4f2 Sep 9 04:56:47.039116 containerd[1866]: 2025-09-09 04:56:46.991 [INFO][4941] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.111.0/26 handle="k8s-pod-network.bf4edd1830cee4582382cdbd65f955c683e062faa88ef6bc5ff69bf537f5c4f2" host="ci-4452.0.0-n-087888047c" Sep 9 04:56:47.039116 containerd[1866]: 2025-09-09 04:56:47.004 [INFO][4941] ipam/ipam.go 1256: Successfully claimed 
IPs: [192.168.111.4/26] block=192.168.111.0/26 handle="k8s-pod-network.bf4edd1830cee4582382cdbd65f955c683e062faa88ef6bc5ff69bf537f5c4f2" host="ci-4452.0.0-n-087888047c" Sep 9 04:56:47.039116 containerd[1866]: 2025-09-09 04:56:47.004 [INFO][4941] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.111.4/26] handle="k8s-pod-network.bf4edd1830cee4582382cdbd65f955c683e062faa88ef6bc5ff69bf537f5c4f2" host="ci-4452.0.0-n-087888047c" Sep 9 04:56:47.039116 containerd[1866]: 2025-09-09 04:56:47.004 [INFO][4941] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 04:56:47.039116 containerd[1866]: 2025-09-09 04:56:47.005 [INFO][4941] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.111.4/26] IPv6=[] ContainerID="bf4edd1830cee4582382cdbd65f955c683e062faa88ef6bc5ff69bf537f5c4f2" HandleID="k8s-pod-network.bf4edd1830cee4582382cdbd65f955c683e062faa88ef6bc5ff69bf537f5c4f2" Workload="ci--4452.0.0--n--087888047c-k8s-calico--apiserver--646f77f6b4--ml9lp-eth0" Sep 9 04:56:47.042495 containerd[1866]: 2025-09-09 04:56:47.007 [INFO][4905] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bf4edd1830cee4582382cdbd65f955c683e062faa88ef6bc5ff69bf537f5c4f2" Namespace="calico-apiserver" Pod="calico-apiserver-646f77f6b4-ml9lp" WorkloadEndpoint="ci--4452.0.0--n--087888047c-k8s-calico--apiserver--646f77f6b4--ml9lp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452.0.0--n--087888047c-k8s-calico--apiserver--646f77f6b4--ml9lp-eth0", GenerateName:"calico-apiserver-646f77f6b4-", Namespace:"calico-apiserver", SelfLink:"", UID:"d841bb99-115e-474c-93ff-3dec18b6a9a8", ResourceVersion:"785", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 4, 56, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"646f77f6b4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452.0.0-n-087888047c", ContainerID:"", Pod:"calico-apiserver-646f77f6b4-ml9lp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.111.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3044e42f9a8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 04:56:47.042495 containerd[1866]: 2025-09-09 04:56:47.007 [INFO][4905] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.111.4/32] ContainerID="bf4edd1830cee4582382cdbd65f955c683e062faa88ef6bc5ff69bf537f5c4f2" Namespace="calico-apiserver" Pod="calico-apiserver-646f77f6b4-ml9lp" WorkloadEndpoint="ci--4452.0.0--n--087888047c-k8s-calico--apiserver--646f77f6b4--ml9lp-eth0" Sep 9 04:56:47.042495 containerd[1866]: 2025-09-09 04:56:47.007 [INFO][4905] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3044e42f9a8 ContainerID="bf4edd1830cee4582382cdbd65f955c683e062faa88ef6bc5ff69bf537f5c4f2" Namespace="calico-apiserver" Pod="calico-apiserver-646f77f6b4-ml9lp" WorkloadEndpoint="ci--4452.0.0--n--087888047c-k8s-calico--apiserver--646f77f6b4--ml9lp-eth0" Sep 9 04:56:47.042495 containerd[1866]: 2025-09-09 04:56:47.013 [INFO][4905] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bf4edd1830cee4582382cdbd65f955c683e062faa88ef6bc5ff69bf537f5c4f2" Namespace="calico-apiserver" Pod="calico-apiserver-646f77f6b4-ml9lp" 
WorkloadEndpoint="ci--4452.0.0--n--087888047c-k8s-calico--apiserver--646f77f6b4--ml9lp-eth0" Sep 9 04:56:47.042495 containerd[1866]: 2025-09-09 04:56:47.015 [INFO][4905] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bf4edd1830cee4582382cdbd65f955c683e062faa88ef6bc5ff69bf537f5c4f2" Namespace="calico-apiserver" Pod="calico-apiserver-646f77f6b4-ml9lp" WorkloadEndpoint="ci--4452.0.0--n--087888047c-k8s-calico--apiserver--646f77f6b4--ml9lp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452.0.0--n--087888047c-k8s-calico--apiserver--646f77f6b4--ml9lp-eth0", GenerateName:"calico-apiserver-646f77f6b4-", Namespace:"calico-apiserver", SelfLink:"", UID:"d841bb99-115e-474c-93ff-3dec18b6a9a8", ResourceVersion:"785", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 4, 56, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"646f77f6b4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452.0.0-n-087888047c", ContainerID:"bf4edd1830cee4582382cdbd65f955c683e062faa88ef6bc5ff69bf537f5c4f2", Pod:"calico-apiserver-646f77f6b4-ml9lp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.111.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3044e42f9a8", MAC:"92:84:c5:10:71:2b", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 04:56:47.042495 containerd[1866]: 2025-09-09 04:56:47.035 [INFO][4905] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bf4edd1830cee4582382cdbd65f955c683e062faa88ef6bc5ff69bf537f5c4f2" Namespace="calico-apiserver" Pod="calico-apiserver-646f77f6b4-ml9lp" WorkloadEndpoint="ci--4452.0.0--n--087888047c-k8s-calico--apiserver--646f77f6b4--ml9lp-eth0" Sep 9 04:56:47.047709 containerd[1866]: time="2025-09-09T04:56:47.046735313Z" level=info msg="CreateContainer within sandbox \"8c117adb4ff67dff649dd5f3c5209ebc64fd889017cb56cfd233eeee2ca8b986\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7b27d3d06eb5dc9bd5f1888deac26de6085b527329a7431d3575fb190a94c405\"" Sep 9 04:56:47.048218 containerd[1866]: time="2025-09-09T04:56:47.048183766Z" level=info msg="StartContainer for \"7b27d3d06eb5dc9bd5f1888deac26de6085b527329a7431d3575fb190a94c405\"" Sep 9 04:56:47.048776 containerd[1866]: time="2025-09-09T04:56:47.048750943Z" level=info msg="connecting to shim 7b27d3d06eb5dc9bd5f1888deac26de6085b527329a7431d3575fb190a94c405" address="unix:///run/containerd/s/1069011161cacf3d7ddce433798fdf6f5afd7b68c468d0b4e2dcaa39412e53a2" protocol=ttrpc version=3 Sep 9 04:56:47.067140 systemd[1]: Started cri-containerd-7b27d3d06eb5dc9bd5f1888deac26de6085b527329a7431d3575fb190a94c405.scope - libcontainer container 7b27d3d06eb5dc9bd5f1888deac26de6085b527329a7431d3575fb190a94c405. 
Sep 9 04:56:47.105305 containerd[1866]: time="2025-09-09T04:56:47.105267698Z" level=info msg="StartContainer for \"7b27d3d06eb5dc9bd5f1888deac26de6085b527329a7431d3575fb190a94c405\" returns successfully" Sep 9 04:56:47.111597 containerd[1866]: time="2025-09-09T04:56:47.111545947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-646f77f6b4-tb4sw,Uid:1ee9be11-f0c2-4be0-9b26-0b300acc210d,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"e56d84dd57dd4863b6c662401e80d1bf26316af41921f0aa9e1377ce90e7719a\"" Sep 9 04:56:47.115164 containerd[1866]: time="2025-09-09T04:56:47.115067167Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 9 04:56:47.150213 containerd[1866]: time="2025-09-09T04:56:47.150054451Z" level=info msg="connecting to shim bf4edd1830cee4582382cdbd65f955c683e062faa88ef6bc5ff69bf537f5c4f2" address="unix:///run/containerd/s/4c3c32cfad580457e885f213943f4a21c5464a80e58987af00188941ff513d1a" namespace=k8s.io protocol=ttrpc version=3 Sep 9 04:56:47.173179 systemd[1]: Started cri-containerd-bf4edd1830cee4582382cdbd65f955c683e062faa88ef6bc5ff69bf537f5c4f2.scope - libcontainer container bf4edd1830cee4582382cdbd65f955c683e062faa88ef6bc5ff69bf537f5c4f2. 
Sep 9 04:56:47.203955 containerd[1866]: time="2025-09-09T04:56:47.203911124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-646f77f6b4-ml9lp,Uid:d841bb99-115e-474c-93ff-3dec18b6a9a8,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"bf4edd1830cee4582382cdbd65f955c683e062faa88ef6bc5ff69bf537f5c4f2\""
Sep 9 04:56:47.301155 systemd-networkd[1687]: vxlan.calico: Gained IPv6LL
Sep 9 04:56:47.808620 kubelet[3383]: I0909 04:56:47.808553 3383 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-5nxp2" podStartSLOduration=35.808536392 podStartE2EDuration="35.808536392s" podCreationTimestamp="2025-09-09 04:56:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 04:56:47.788970174 +0000 UTC m=+41.295921115" watchObservedRunningTime="2025-09-09 04:56:47.808536392 +0000 UTC m=+41.315487333"
Sep 9 04:56:47.877151 systemd-networkd[1687]: calia92f0f44c70: Gained IPv6LL
Sep 9 04:56:48.325598 systemd-networkd[1687]: cali4f463448632: Gained IPv6LL
Sep 9 04:56:48.639739 containerd[1866]: time="2025-09-09T04:56:48.639588673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-kfzzp,Uid:c59e2b93-fd42-4540-8fc2-a0a9c55c3504,Namespace:calico-system,Attempt:0,}"
Sep 9 04:56:48.773922 systemd-networkd[1687]: cali3044e42f9a8: Gained IPv6LL
Sep 9 04:56:48.775960 systemd-networkd[1687]: calic2b485130d4: Link UP
Sep 9 04:56:48.777027 systemd-networkd[1687]: calic2b485130d4: Gained carrier
Sep 9 04:56:48.796480 containerd[1866]: 2025-09-09 04:56:48.700 [INFO][5165] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4452.0.0--n--087888047c-k8s-goldmane--54d579b49d--kfzzp-eth0 goldmane-54d579b49d- calico-system c59e2b93-fd42-4540-8fc2-a0a9c55c3504 788 0 2025-09-09 04:56:25 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4452.0.0-n-087888047c goldmane-54d579b49d-kfzzp eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calic2b485130d4 [] [] }} ContainerID="0641b3c0086d9b7b2d57d2f93cd5a6cd989c162cd48e73015e5a2f0757e97a9e" Namespace="calico-system" Pod="goldmane-54d579b49d-kfzzp" WorkloadEndpoint="ci--4452.0.0--n--087888047c-k8s-goldmane--54d579b49d--kfzzp-"
Sep 9 04:56:48.796480 containerd[1866]: 2025-09-09 04:56:48.700 [INFO][5165] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0641b3c0086d9b7b2d57d2f93cd5a6cd989c162cd48e73015e5a2f0757e97a9e" Namespace="calico-system" Pod="goldmane-54d579b49d-kfzzp" WorkloadEndpoint="ci--4452.0.0--n--087888047c-k8s-goldmane--54d579b49d--kfzzp-eth0"
Sep 9 04:56:48.796480 containerd[1866]: 2025-09-09 04:56:48.723 [INFO][5178] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0641b3c0086d9b7b2d57d2f93cd5a6cd989c162cd48e73015e5a2f0757e97a9e" HandleID="k8s-pod-network.0641b3c0086d9b7b2d57d2f93cd5a6cd989c162cd48e73015e5a2f0757e97a9e" Workload="ci--4452.0.0--n--087888047c-k8s-goldmane--54d579b49d--kfzzp-eth0"
Sep 9 04:56:48.796480 containerd[1866]: 2025-09-09 04:56:48.724 [INFO][5178] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0641b3c0086d9b7b2d57d2f93cd5a6cd989c162cd48e73015e5a2f0757e97a9e" HandleID="k8s-pod-network.0641b3c0086d9b7b2d57d2f93cd5a6cd989c162cd48e73015e5a2f0757e97a9e" Workload="ci--4452.0.0--n--087888047c-k8s-goldmane--54d579b49d--kfzzp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3790), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4452.0.0-n-087888047c", "pod":"goldmane-54d579b49d-kfzzp", "timestamp":"2025-09-09 04:56:48.72388349 +0000 UTC"}, Hostname:"ci-4452.0.0-n-087888047c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 9 04:56:48.796480 containerd[1866]: 2025-09-09 04:56:48.724 [INFO][5178] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 9 04:56:48.796480 containerd[1866]: 2025-09-09 04:56:48.724 [INFO][5178] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 9 04:56:48.796480 containerd[1866]: 2025-09-09 04:56:48.724 [INFO][5178] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4452.0.0-n-087888047c'
Sep 9 04:56:48.796480 containerd[1866]: 2025-09-09 04:56:48.732 [INFO][5178] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0641b3c0086d9b7b2d57d2f93cd5a6cd989c162cd48e73015e5a2f0757e97a9e" host="ci-4452.0.0-n-087888047c"
Sep 9 04:56:48.796480 containerd[1866]: 2025-09-09 04:56:48.737 [INFO][5178] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4452.0.0-n-087888047c"
Sep 9 04:56:48.796480 containerd[1866]: 2025-09-09 04:56:48.743 [INFO][5178] ipam/ipam.go 511: Trying affinity for 192.168.111.0/26 host="ci-4452.0.0-n-087888047c"
Sep 9 04:56:48.796480 containerd[1866]: 2025-09-09 04:56:48.745 [INFO][5178] ipam/ipam.go 158: Attempting to load block cidr=192.168.111.0/26 host="ci-4452.0.0-n-087888047c"
Sep 9 04:56:48.796480 containerd[1866]: 2025-09-09 04:56:48.748 [INFO][5178] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.111.0/26 host="ci-4452.0.0-n-087888047c"
Sep 9 04:56:48.796480 containerd[1866]: 2025-09-09 04:56:48.748 [INFO][5178] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.111.0/26 handle="k8s-pod-network.0641b3c0086d9b7b2d57d2f93cd5a6cd989c162cd48e73015e5a2f0757e97a9e" host="ci-4452.0.0-n-087888047c"
Sep 9 04:56:48.796480 containerd[1866]: 2025-09-09 04:56:48.750 [INFO][5178] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0641b3c0086d9b7b2d57d2f93cd5a6cd989c162cd48e73015e5a2f0757e97a9e
Sep 9 04:56:48.796480 containerd[1866]: 2025-09-09 04:56:48.758 [INFO][5178] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.111.0/26 handle="k8s-pod-network.0641b3c0086d9b7b2d57d2f93cd5a6cd989c162cd48e73015e5a2f0757e97a9e" host="ci-4452.0.0-n-087888047c"
Sep 9 04:56:48.796480 containerd[1866]: 2025-09-09 04:56:48.769 [INFO][5178] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.111.5/26] block=192.168.111.0/26 handle="k8s-pod-network.0641b3c0086d9b7b2d57d2f93cd5a6cd989c162cd48e73015e5a2f0757e97a9e" host="ci-4452.0.0-n-087888047c"
Sep 9 04:56:48.796480 containerd[1866]: 2025-09-09 04:56:48.769 [INFO][5178] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.111.5/26] handle="k8s-pod-network.0641b3c0086d9b7b2d57d2f93cd5a6cd989c162cd48e73015e5a2f0757e97a9e" host="ci-4452.0.0-n-087888047c"
Sep 9 04:56:48.796480 containerd[1866]: 2025-09-09 04:56:48.769 [INFO][5178] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 9 04:56:48.796480 containerd[1866]: 2025-09-09 04:56:48.769 [INFO][5178] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.111.5/26] IPv6=[] ContainerID="0641b3c0086d9b7b2d57d2f93cd5a6cd989c162cd48e73015e5a2f0757e97a9e" HandleID="k8s-pod-network.0641b3c0086d9b7b2d57d2f93cd5a6cd989c162cd48e73015e5a2f0757e97a9e" Workload="ci--4452.0.0--n--087888047c-k8s-goldmane--54d579b49d--kfzzp-eth0"
Sep 9 04:56:48.797201 containerd[1866]: 2025-09-09 04:56:48.772 [INFO][5165] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0641b3c0086d9b7b2d57d2f93cd5a6cd989c162cd48e73015e5a2f0757e97a9e" Namespace="calico-system" Pod="goldmane-54d579b49d-kfzzp" WorkloadEndpoint="ci--4452.0.0--n--087888047c-k8s-goldmane--54d579b49d--kfzzp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452.0.0--n--087888047c-k8s-goldmane--54d579b49d--kfzzp-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"c59e2b93-fd42-4540-8fc2-a0a9c55c3504", ResourceVersion:"788", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 4, 56, 25, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452.0.0-n-087888047c", ContainerID:"", Pod:"goldmane-54d579b49d-kfzzp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.111.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic2b485130d4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 9 04:56:48.797201 containerd[1866]: 2025-09-09 04:56:48.772 [INFO][5165] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.111.5/32] ContainerID="0641b3c0086d9b7b2d57d2f93cd5a6cd989c162cd48e73015e5a2f0757e97a9e" Namespace="calico-system" Pod="goldmane-54d579b49d-kfzzp" WorkloadEndpoint="ci--4452.0.0--n--087888047c-k8s-goldmane--54d579b49d--kfzzp-eth0"
Sep 9 04:56:48.797201 containerd[1866]: 2025-09-09 04:56:48.772 [INFO][5165] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic2b485130d4 ContainerID="0641b3c0086d9b7b2d57d2f93cd5a6cd989c162cd48e73015e5a2f0757e97a9e" Namespace="calico-system" Pod="goldmane-54d579b49d-kfzzp" WorkloadEndpoint="ci--4452.0.0--n--087888047c-k8s-goldmane--54d579b49d--kfzzp-eth0"
Sep 9 04:56:48.797201 containerd[1866]: 2025-09-09 04:56:48.777 [INFO][5165] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0641b3c0086d9b7b2d57d2f93cd5a6cd989c162cd48e73015e5a2f0757e97a9e" Namespace="calico-system" Pod="goldmane-54d579b49d-kfzzp" WorkloadEndpoint="ci--4452.0.0--n--087888047c-k8s-goldmane--54d579b49d--kfzzp-eth0"
Sep 9 04:56:48.797201 containerd[1866]: 2025-09-09 04:56:48.780 [INFO][5165] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0641b3c0086d9b7b2d57d2f93cd5a6cd989c162cd48e73015e5a2f0757e97a9e" Namespace="calico-system" Pod="goldmane-54d579b49d-kfzzp" WorkloadEndpoint="ci--4452.0.0--n--087888047c-k8s-goldmane--54d579b49d--kfzzp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452.0.0--n--087888047c-k8s-goldmane--54d579b49d--kfzzp-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"c59e2b93-fd42-4540-8fc2-a0a9c55c3504", ResourceVersion:"788", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 4, 56, 25, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452.0.0-n-087888047c", ContainerID:"0641b3c0086d9b7b2d57d2f93cd5a6cd989c162cd48e73015e5a2f0757e97a9e", Pod:"goldmane-54d579b49d-kfzzp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.111.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic2b485130d4", MAC:"0a:94:40:c5:ac:6e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 9 04:56:48.797201 containerd[1866]: 2025-09-09 04:56:48.793 [INFO][5165] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0641b3c0086d9b7b2d57d2f93cd5a6cd989c162cd48e73015e5a2f0757e97a9e" Namespace="calico-system" Pod="goldmane-54d579b49d-kfzzp" WorkloadEndpoint="ci--4452.0.0--n--087888047c-k8s-goldmane--54d579b49d--kfzzp-eth0"
Sep 9 04:56:48.864499 containerd[1866]: time="2025-09-09T04:56:48.861703112Z" level=info msg="connecting to shim 0641b3c0086d9b7b2d57d2f93cd5a6cd989c162cd48e73015e5a2f0757e97a9e" address="unix:///run/containerd/s/7b31bbd459b93149f689125be40cab393665007197d307f1f1974299d6c4a21c" namespace=k8s.io protocol=ttrpc version=3
Sep 9 04:56:48.900279 systemd[1]: Started cri-containerd-0641b3c0086d9b7b2d57d2f93cd5a6cd989c162cd48e73015e5a2f0757e97a9e.scope - libcontainer container 0641b3c0086d9b7b2d57d2f93cd5a6cd989c162cd48e73015e5a2f0757e97a9e.
Sep 9 04:56:48.938563 containerd[1866]: time="2025-09-09T04:56:48.938523459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-kfzzp,Uid:c59e2b93-fd42-4540-8fc2-a0a9c55c3504,Namespace:calico-system,Attempt:0,} returns sandbox id \"0641b3c0086d9b7b2d57d2f93cd5a6cd989c162cd48e73015e5a2f0757e97a9e\""
Sep 9 04:56:49.640281 containerd[1866]: time="2025-09-09T04:56:49.640059532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-h2544,Uid:cfb38fab-c883-40c5-a29d-aad239ca3e1a,Namespace:kube-system,Attempt:0,}"
Sep 9 04:56:49.640281 containerd[1866]: time="2025-09-09T04:56:49.640205681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-blr4m,Uid:ef65cad1-f38f-4d74-aff6-26558cee563c,Namespace:calico-system,Attempt:0,}"
Sep 9 04:56:49.685929 containerd[1866]: time="2025-09-09T04:56:49.685832844Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:56:49.689953 containerd[1866]: time="2025-09-09T04:56:49.689914314Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=44530807"
Sep 9 04:56:49.693122 containerd[1866]: time="2025-09-09T04:56:49.693053346Z" level=info msg="ImageCreate event name:\"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:56:49.699529 containerd[1866]: time="2025-09-09T04:56:49.699433238Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 04:56:49.699873 containerd[1866]: time="2025-09-09T04:56:49.699847539Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 2.584747347s"
Sep 9 04:56:49.699873 containerd[1866]: time="2025-09-09T04:56:49.699874580Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\""
Sep 9 04:56:49.701719 containerd[1866]: time="2025-09-09T04:56:49.701618050Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\""
Sep 9 04:56:49.704352 containerd[1866]: time="2025-09-09T04:56:49.704327085Z" level=info msg="CreateContainer within sandbox \"e56d84dd57dd4863b6c662401e80d1bf26316af41921f0aa9e1377ce90e7719a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Sep 9 04:56:49.745851 containerd[1866]: time="2025-09-09T04:56:49.745771760Z" level=info msg="Container 1d1226d68e6eac44e35a377d5c81a18adace4a67cc653262a4d7e4379a5892e8: CDI devices from CRI Config.CDIDevices: []"
Sep 9 04:56:49.770302 containerd[1866]: time="2025-09-09T04:56:49.770260249Z" level=info msg="CreateContainer within sandbox \"e56d84dd57dd4863b6c662401e80d1bf26316af41921f0aa9e1377ce90e7719a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"1d1226d68e6eac44e35a377d5c81a18adace4a67cc653262a4d7e4379a5892e8\""
Sep 9 04:56:49.771751 containerd[1866]: time="2025-09-09T04:56:49.771731686Z" level=info msg="StartContainer for \"1d1226d68e6eac44e35a377d5c81a18adace4a67cc653262a4d7e4379a5892e8\""
Sep 9 04:56:49.772655 containerd[1866]: time="2025-09-09T04:56:49.772630354Z" level=info msg="connecting to shim 1d1226d68e6eac44e35a377d5c81a18adace4a67cc653262a4d7e4379a5892e8" address="unix:///run/containerd/s/07c588f84cc92c2d6f6b06310cffc66d46648c7e286ea387fa8077f02be01378" protocol=ttrpc version=3
Sep 9 04:56:49.790891 systemd-networkd[1687]: caliebec926e106: Link UP
Sep 9 04:56:49.792760 systemd-networkd[1687]: caliebec926e106: Gained carrier
Sep 9 04:56:49.804516 systemd[1]: Started cri-containerd-1d1226d68e6eac44e35a377d5c81a18adace4a67cc653262a4d7e4379a5892e8.scope - libcontainer container 1d1226d68e6eac44e35a377d5c81a18adace4a67cc653262a4d7e4379a5892e8.
Sep 9 04:56:49.817368 containerd[1866]: 2025-09-09 04:56:49.699 [INFO][5243] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4452.0.0--n--087888047c-k8s-coredns--668d6bf9bc--h2544-eth0 coredns-668d6bf9bc- kube-system cfb38fab-c883-40c5-a29d-aad239ca3e1a 790 0 2025-09-09 04:56:12 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4452.0.0-n-087888047c coredns-668d6bf9bc-h2544 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliebec926e106 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="28af79bc40241d3e10d94808b61bddcb1750a1adebc08acb746554a789b592c9" Namespace="kube-system" Pod="coredns-668d6bf9bc-h2544" WorkloadEndpoint="ci--4452.0.0--n--087888047c-k8s-coredns--668d6bf9bc--h2544-"
Sep 9 04:56:49.817368 containerd[1866]: 2025-09-09 04:56:49.699 [INFO][5243] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="28af79bc40241d3e10d94808b61bddcb1750a1adebc08acb746554a789b592c9" Namespace="kube-system" Pod="coredns-668d6bf9bc-h2544" WorkloadEndpoint="ci--4452.0.0--n--087888047c-k8s-coredns--668d6bf9bc--h2544-eth0"
Sep 9 04:56:49.817368 containerd[1866]: 2025-09-09 04:56:49.731 [INFO][5269] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="28af79bc40241d3e10d94808b61bddcb1750a1adebc08acb746554a789b592c9" HandleID="k8s-pod-network.28af79bc40241d3e10d94808b61bddcb1750a1adebc08acb746554a789b592c9" Workload="ci--4452.0.0--n--087888047c-k8s-coredns--668d6bf9bc--h2544-eth0"
Sep 9 04:56:49.817368 containerd[1866]: 2025-09-09 04:56:49.731 [INFO][5269] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="28af79bc40241d3e10d94808b61bddcb1750a1adebc08acb746554a789b592c9" HandleID="k8s-pod-network.28af79bc40241d3e10d94808b61bddcb1750a1adebc08acb746554a789b592c9" Workload="ci--4452.0.0--n--087888047c-k8s-coredns--668d6bf9bc--h2544-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cb650), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4452.0.0-n-087888047c", "pod":"coredns-668d6bf9bc-h2544", "timestamp":"2025-09-09 04:56:49.731412022 +0000 UTC"}, Hostname:"ci-4452.0.0-n-087888047c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 9 04:56:49.817368 containerd[1866]: 2025-09-09 04:56:49.731 [INFO][5269] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 9 04:56:49.817368 containerd[1866]: 2025-09-09 04:56:49.731 [INFO][5269] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 9 04:56:49.817368 containerd[1866]: 2025-09-09 04:56:49.731 [INFO][5269] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4452.0.0-n-087888047c'
Sep 9 04:56:49.817368 containerd[1866]: 2025-09-09 04:56:49.740 [INFO][5269] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.28af79bc40241d3e10d94808b61bddcb1750a1adebc08acb746554a789b592c9" host="ci-4452.0.0-n-087888047c"
Sep 9 04:56:49.817368 containerd[1866]: 2025-09-09 04:56:49.749 [INFO][5269] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4452.0.0-n-087888047c"
Sep 9 04:56:49.817368 containerd[1866]: 2025-09-09 04:56:49.753 [INFO][5269] ipam/ipam.go 511: Trying affinity for 192.168.111.0/26 host="ci-4452.0.0-n-087888047c"
Sep 9 04:56:49.817368 containerd[1866]: 2025-09-09 04:56:49.756 [INFO][5269] ipam/ipam.go 158: Attempting to load block cidr=192.168.111.0/26 host="ci-4452.0.0-n-087888047c"
Sep 9 04:56:49.817368 containerd[1866]: 2025-09-09 04:56:49.758 [INFO][5269] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.111.0/26 host="ci-4452.0.0-n-087888047c"
Sep 9 04:56:49.817368 containerd[1866]: 2025-09-09 04:56:49.758 [INFO][5269] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.111.0/26 handle="k8s-pod-network.28af79bc40241d3e10d94808b61bddcb1750a1adebc08acb746554a789b592c9" host="ci-4452.0.0-n-087888047c"
Sep 9 04:56:49.817368 containerd[1866]: 2025-09-09 04:56:49.759 [INFO][5269] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.28af79bc40241d3e10d94808b61bddcb1750a1adebc08acb746554a789b592c9
Sep 9 04:56:49.817368 containerd[1866]: 2025-09-09 04:56:49.767 [INFO][5269] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.111.0/26 handle="k8s-pod-network.28af79bc40241d3e10d94808b61bddcb1750a1adebc08acb746554a789b592c9" host="ci-4452.0.0-n-087888047c"
Sep 9 04:56:49.817368 containerd[1866]: 2025-09-09 04:56:49.779 [INFO][5269] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.111.6/26] block=192.168.111.0/26 handle="k8s-pod-network.28af79bc40241d3e10d94808b61bddcb1750a1adebc08acb746554a789b592c9" host="ci-4452.0.0-n-087888047c"
Sep 9 04:56:49.817368 containerd[1866]: 2025-09-09 04:56:49.779 [INFO][5269] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.111.6/26] handle="k8s-pod-network.28af79bc40241d3e10d94808b61bddcb1750a1adebc08acb746554a789b592c9" host="ci-4452.0.0-n-087888047c"
Sep 9 04:56:49.817368 containerd[1866]: 2025-09-09 04:56:49.779 [INFO][5269] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 9 04:56:49.817368 containerd[1866]: 2025-09-09 04:56:49.779 [INFO][5269] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.111.6/26] IPv6=[] ContainerID="28af79bc40241d3e10d94808b61bddcb1750a1adebc08acb746554a789b592c9" HandleID="k8s-pod-network.28af79bc40241d3e10d94808b61bddcb1750a1adebc08acb746554a789b592c9" Workload="ci--4452.0.0--n--087888047c-k8s-coredns--668d6bf9bc--h2544-eth0"
Sep 9 04:56:49.819283 containerd[1866]: 2025-09-09 04:56:49.782 [INFO][5243] cni-plugin/k8s.go 418: Populated endpoint ContainerID="28af79bc40241d3e10d94808b61bddcb1750a1adebc08acb746554a789b592c9" Namespace="kube-system" Pod="coredns-668d6bf9bc-h2544" WorkloadEndpoint="ci--4452.0.0--n--087888047c-k8s-coredns--668d6bf9bc--h2544-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452.0.0--n--087888047c-k8s-coredns--668d6bf9bc--h2544-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"cfb38fab-c883-40c5-a29d-aad239ca3e1a", ResourceVersion:"790", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 4, 56, 12, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452.0.0-n-087888047c", ContainerID:"", Pod:"coredns-668d6bf9bc-h2544", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.111.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliebec926e106", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 9 04:56:49.819283 containerd[1866]: 2025-09-09 04:56:49.783 [INFO][5243] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.111.6/32] ContainerID="28af79bc40241d3e10d94808b61bddcb1750a1adebc08acb746554a789b592c9" Namespace="kube-system" Pod="coredns-668d6bf9bc-h2544" WorkloadEndpoint="ci--4452.0.0--n--087888047c-k8s-coredns--668d6bf9bc--h2544-eth0"
Sep 9 04:56:49.819283 containerd[1866]: 2025-09-09 04:56:49.783 [INFO][5243] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliebec926e106 ContainerID="28af79bc40241d3e10d94808b61bddcb1750a1adebc08acb746554a789b592c9" Namespace="kube-system" Pod="coredns-668d6bf9bc-h2544" WorkloadEndpoint="ci--4452.0.0--n--087888047c-k8s-coredns--668d6bf9bc--h2544-eth0"
Sep 9 04:56:49.819283 containerd[1866]: 2025-09-09 04:56:49.793 [INFO][5243] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="28af79bc40241d3e10d94808b61bddcb1750a1adebc08acb746554a789b592c9" Namespace="kube-system" Pod="coredns-668d6bf9bc-h2544" WorkloadEndpoint="ci--4452.0.0--n--087888047c-k8s-coredns--668d6bf9bc--h2544-eth0"
Sep 9 04:56:49.819283 containerd[1866]: 2025-09-09 04:56:49.794 [INFO][5243] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="28af79bc40241d3e10d94808b61bddcb1750a1adebc08acb746554a789b592c9" Namespace="kube-system" Pod="coredns-668d6bf9bc-h2544" WorkloadEndpoint="ci--4452.0.0--n--087888047c-k8s-coredns--668d6bf9bc--h2544-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452.0.0--n--087888047c-k8s-coredns--668d6bf9bc--h2544-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"cfb38fab-c883-40c5-a29d-aad239ca3e1a", ResourceVersion:"790", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 4, 56, 12, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452.0.0-n-087888047c", ContainerID:"28af79bc40241d3e10d94808b61bddcb1750a1adebc08acb746554a789b592c9", Pod:"coredns-668d6bf9bc-h2544", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.111.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliebec926e106", MAC:"ca:cc:2d:25:17:1e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 9 04:56:49.819283 containerd[1866]: 2025-09-09 04:56:49.814 [INFO][5243] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="28af79bc40241d3e10d94808b61bddcb1750a1adebc08acb746554a789b592c9" Namespace="kube-system" Pod="coredns-668d6bf9bc-h2544" WorkloadEndpoint="ci--4452.0.0--n--087888047c-k8s-coredns--668d6bf9bc--h2544-eth0"
Sep 9 04:56:49.873751 containerd[1866]: time="2025-09-09T04:56:49.873568146Z" level=info msg="StartContainer for \"1d1226d68e6eac44e35a377d5c81a18adace4a67cc653262a4d7e4379a5892e8\" returns successfully"
Sep 9 04:56:49.887225 containerd[1866]: time="2025-09-09T04:56:49.887120235Z" level=info msg="connecting to shim 28af79bc40241d3e10d94808b61bddcb1750a1adebc08acb746554a789b592c9" address="unix:///run/containerd/s/4f32d9da6c2eefa1aed2d375127fbf4141db9b6daed00f0ff7ef71a48752ce50" namespace=k8s.io protocol=ttrpc version=3
Sep 9 04:56:49.903205 systemd-networkd[1687]: cali0f0f9ab02ac: Link UP
Sep 9 04:56:49.907097 systemd-networkd[1687]: cali0f0f9ab02ac: Gained carrier
Sep 9 04:56:49.916233 systemd[1]: Started cri-containerd-28af79bc40241d3e10d94808b61bddcb1750a1adebc08acb746554a789b592c9.scope - libcontainer container 28af79bc40241d3e10d94808b61bddcb1750a1adebc08acb746554a789b592c9.
Sep 9 04:56:49.935794 containerd[1866]: 2025-09-09 04:56:49.715 [INFO][5247] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4452.0.0--n--087888047c-k8s-csi--node--driver--blr4m-eth0 csi-node-driver- calico-system ef65cad1-f38f-4d74-aff6-26558cee563c 683 0 2025-09-09 04:56:25 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4452.0.0-n-087888047c csi-node-driver-blr4m eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali0f0f9ab02ac [] [] }} ContainerID="63399b67b9b662a5022b60b8a8f83b348fd34bef73202f9d9272c97e7ec14b6a" Namespace="calico-system" Pod="csi-node-driver-blr4m" WorkloadEndpoint="ci--4452.0.0--n--087888047c-k8s-csi--node--driver--blr4m-"
Sep 9 04:56:49.935794 containerd[1866]: 2025-09-09 04:56:49.715 [INFO][5247] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="63399b67b9b662a5022b60b8a8f83b348fd34bef73202f9d9272c97e7ec14b6a" Namespace="calico-system" Pod="csi-node-driver-blr4m" WorkloadEndpoint="ci--4452.0.0--n--087888047c-k8s-csi--node--driver--blr4m-eth0"
Sep 9 04:56:49.935794 containerd[1866]: 2025-09-09 04:56:49.751 [INFO][5274] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="63399b67b9b662a5022b60b8a8f83b348fd34bef73202f9d9272c97e7ec14b6a" HandleID="k8s-pod-network.63399b67b9b662a5022b60b8a8f83b348fd34bef73202f9d9272c97e7ec14b6a" Workload="ci--4452.0.0--n--087888047c-k8s-csi--node--driver--blr4m-eth0"
Sep 9 04:56:49.935794 containerd[1866]: 2025-09-09 04:56:49.751 [INFO][5274] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="63399b67b9b662a5022b60b8a8f83b348fd34bef73202f9d9272c97e7ec14b6a" HandleID="k8s-pod-network.63399b67b9b662a5022b60b8a8f83b348fd34bef73202f9d9272c97e7ec14b6a" Workload="ci--4452.0.0--n--087888047c-k8s-csi--node--driver--blr4m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3b70), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4452.0.0-n-087888047c", "pod":"csi-node-driver-blr4m", "timestamp":"2025-09-09 04:56:49.751448614 +0000 UTC"}, Hostname:"ci-4452.0.0-n-087888047c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 9 04:56:49.935794 containerd[1866]: 2025-09-09 04:56:49.751 [INFO][5274] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 9 04:56:49.935794 containerd[1866]: 2025-09-09 04:56:49.779 [INFO][5274] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 9 04:56:49.935794 containerd[1866]: 2025-09-09 04:56:49.779 [INFO][5274] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4452.0.0-n-087888047c'
Sep 9 04:56:49.935794 containerd[1866]: 2025-09-09 04:56:49.841 [INFO][5274] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.63399b67b9b662a5022b60b8a8f83b348fd34bef73202f9d9272c97e7ec14b6a" host="ci-4452.0.0-n-087888047c"
Sep 9 04:56:49.935794 containerd[1866]: 2025-09-09 04:56:49.851 [INFO][5274] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4452.0.0-n-087888047c"
Sep 9 04:56:49.935794 containerd[1866]: 2025-09-09 04:56:49.857 [INFO][5274] ipam/ipam.go 511: Trying affinity for 192.168.111.0/26 host="ci-4452.0.0-n-087888047c"
Sep 9 04:56:49.935794 containerd[1866]: 2025-09-09 04:56:49.859 [INFO][5274] ipam/ipam.go 158: Attempting to load block cidr=192.168.111.0/26 host="ci-4452.0.0-n-087888047c"
Sep 9 04:56:49.935794 containerd[1866]: 2025-09-09 04:56:49.863 [INFO][5274] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.111.0/26 host="ci-4452.0.0-n-087888047c"
Sep 9 04:56:49.935794 containerd[1866]: 2025-09-09 04:56:49.863 [INFO][5274] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.111.0/26 handle="k8s-pod-network.63399b67b9b662a5022b60b8a8f83b348fd34bef73202f9d9272c97e7ec14b6a" host="ci-4452.0.0-n-087888047c"
Sep 9 04:56:49.935794 containerd[1866]: 2025-09-09 04:56:49.866 [INFO][5274] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.63399b67b9b662a5022b60b8a8f83b348fd34bef73202f9d9272c97e7ec14b6a
Sep 9 04:56:49.935794 containerd[1866]: 2025-09-09 04:56:49.870 [INFO][5274] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.111.0/26 handle="k8s-pod-network.63399b67b9b662a5022b60b8a8f83b348fd34bef73202f9d9272c97e7ec14b6a" host="ci-4452.0.0-n-087888047c"
Sep 9 04:56:49.935794 containerd[1866]: 2025-09-09 04:56:49.890 [INFO][5274] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.111.7/26] block=192.168.111.0/26 handle="k8s-pod-network.63399b67b9b662a5022b60b8a8f83b348fd34bef73202f9d9272c97e7ec14b6a" host="ci-4452.0.0-n-087888047c"
Sep 9 04:56:49.935794 containerd[1866]: 2025-09-09 04:56:49.890 [INFO][5274] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.111.7/26] handle="k8s-pod-network.63399b67b9b662a5022b60b8a8f83b348fd34bef73202f9d9272c97e7ec14b6a" host="ci-4452.0.0-n-087888047c"
Sep 9 04:56:49.935794 containerd[1866]: 2025-09-09 04:56:49.890 [INFO][5274] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 9 04:56:49.935794 containerd[1866]: 2025-09-09 04:56:49.890 [INFO][5274] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.111.7/26] IPv6=[] ContainerID="63399b67b9b662a5022b60b8a8f83b348fd34bef73202f9d9272c97e7ec14b6a" HandleID="k8s-pod-network.63399b67b9b662a5022b60b8a8f83b348fd34bef73202f9d9272c97e7ec14b6a" Workload="ci--4452.0.0--n--087888047c-k8s-csi--node--driver--blr4m-eth0"
Sep 9 04:56:49.937378 containerd[1866]: 2025-09-09 04:56:49.895 [INFO][5247] cni-plugin/k8s.go 418: Populated endpoint ContainerID="63399b67b9b662a5022b60b8a8f83b348fd34bef73202f9d9272c97e7ec14b6a" Namespace="calico-system" Pod="csi-node-driver-blr4m" WorkloadEndpoint="ci--4452.0.0--n--087888047c-k8s-csi--node--driver--blr4m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452.0.0--n--087888047c-k8s-csi--node--driver--blr4m-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ef65cad1-f38f-4d74-aff6-26558cee563c", ResourceVersion:"683", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 4, 56, 25, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452.0.0-n-087888047c", ContainerID:"", Pod:"csi-node-driver-blr4m", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.111.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0f0f9ab02ac", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 9 04:56:49.937378 containerd[1866]: 2025-09-09 04:56:49.895 [INFO][5247] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.111.7/32] ContainerID="63399b67b9b662a5022b60b8a8f83b348fd34bef73202f9d9272c97e7ec14b6a" Namespace="calico-system" Pod="csi-node-driver-blr4m" WorkloadEndpoint="ci--4452.0.0--n--087888047c-k8s-csi--node--driver--blr4m-eth0"
Sep 9 04:56:49.937378 containerd[1866]: 2025-09-09 04:56:49.895 [INFO][5247] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0f0f9ab02ac ContainerID="63399b67b9b662a5022b60b8a8f83b348fd34bef73202f9d9272c97e7ec14b6a" Namespace="calico-system" Pod="csi-node-driver-blr4m" WorkloadEndpoint="ci--4452.0.0--n--087888047c-k8s-csi--node--driver--blr4m-eth0"
Sep 9 04:56:49.937378 containerd[1866]: 2025-09-09 04:56:49.908 [INFO][5247] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="63399b67b9b662a5022b60b8a8f83b348fd34bef73202f9d9272c97e7ec14b6a" Namespace="calico-system" Pod="csi-node-driver-blr4m" WorkloadEndpoint="ci--4452.0.0--n--087888047c-k8s-csi--node--driver--blr4m-eth0"
Sep 9 04:56:49.937378 containerd[1866]: 2025-09-09 04:56:49.912 [INFO][5247] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="63399b67b9b662a5022b60b8a8f83b348fd34bef73202f9d9272c97e7ec14b6a" Namespace="calico-system" Pod="csi-node-driver-blr4m" WorkloadEndpoint="ci--4452.0.0--n--087888047c-k8s-csi--node--driver--blr4m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452.0.0--n--087888047c-k8s-csi--node--driver--blr4m-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system",
SelfLink:"", UID:"ef65cad1-f38f-4d74-aff6-26558cee563c", ResourceVersion:"683", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 4, 56, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452.0.0-n-087888047c", ContainerID:"63399b67b9b662a5022b60b8a8f83b348fd34bef73202f9d9272c97e7ec14b6a", Pod:"csi-node-driver-blr4m", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.111.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0f0f9ab02ac", MAC:"76:24:16:c3:74:0a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 04:56:49.937378 containerd[1866]: 2025-09-09 04:56:49.930 [INFO][5247] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="63399b67b9b662a5022b60b8a8f83b348fd34bef73202f9d9272c97e7ec14b6a" Namespace="calico-system" Pod="csi-node-driver-blr4m" WorkloadEndpoint="ci--4452.0.0--n--087888047c-k8s-csi--node--driver--blr4m-eth0" Sep 9 04:56:49.977491 containerd[1866]: time="2025-09-09T04:56:49.977455822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-h2544,Uid:cfb38fab-c883-40c5-a29d-aad239ca3e1a,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"28af79bc40241d3e10d94808b61bddcb1750a1adebc08acb746554a789b592c9\"" Sep 9 04:56:49.980212 containerd[1866]: time="2025-09-09T04:56:49.980182921Z" level=info msg="CreateContainer within sandbox \"28af79bc40241d3e10d94808b61bddcb1750a1adebc08acb746554a789b592c9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 04:56:49.998789 containerd[1866]: time="2025-09-09T04:56:49.998752037Z" level=info msg="connecting to shim 63399b67b9b662a5022b60b8a8f83b348fd34bef73202f9d9272c97e7ec14b6a" address="unix:///run/containerd/s/f52c5ac517ebac941e4d8d83bf63271f6afaaf1d72c69d7e807058f9719b88ee" namespace=k8s.io protocol=ttrpc version=3 Sep 9 04:56:50.008545 containerd[1866]: time="2025-09-09T04:56:50.007955832Z" level=info msg="Container 0dabefa58affd7877c98939e58c28b9e89b03b29a8f19e64928b3dcbe5eb9340: CDI devices from CRI Config.CDIDevices: []" Sep 9 04:56:50.027136 systemd[1]: Started cri-containerd-63399b67b9b662a5022b60b8a8f83b348fd34bef73202f9d9272c97e7ec14b6a.scope - libcontainer container 63399b67b9b662a5022b60b8a8f83b348fd34bef73202f9d9272c97e7ec14b6a. 
Sep 9 04:56:50.030614 containerd[1866]: time="2025-09-09T04:56:50.030583408Z" level=info msg="CreateContainer within sandbox \"28af79bc40241d3e10d94808b61bddcb1750a1adebc08acb746554a789b592c9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0dabefa58affd7877c98939e58c28b9e89b03b29a8f19e64928b3dcbe5eb9340\"" Sep 9 04:56:50.031810 containerd[1866]: time="2025-09-09T04:56:50.031629184Z" level=info msg="StartContainer for \"0dabefa58affd7877c98939e58c28b9e89b03b29a8f19e64928b3dcbe5eb9340\"" Sep 9 04:56:50.034315 containerd[1866]: time="2025-09-09T04:56:50.034273009Z" level=info msg="connecting to shim 0dabefa58affd7877c98939e58c28b9e89b03b29a8f19e64928b3dcbe5eb9340" address="unix:///run/containerd/s/4f32d9da6c2eefa1aed2d375127fbf4141db9b6daed00f0ff7ef71a48752ce50" protocol=ttrpc version=3 Sep 9 04:56:50.056115 systemd[1]: Started cri-containerd-0dabefa58affd7877c98939e58c28b9e89b03b29a8f19e64928b3dcbe5eb9340.scope - libcontainer container 0dabefa58affd7877c98939e58c28b9e89b03b29a8f19e64928b3dcbe5eb9340. 
Sep 9 04:56:50.089054 containerd[1866]: time="2025-09-09T04:56:50.088812031Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:56:50.092339 containerd[1866]: time="2025-09-09T04:56:50.092231352Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 9 04:56:50.097901 containerd[1866]: time="2025-09-09T04:56:50.097875381Z" level=info msg="StartContainer for \"0dabefa58affd7877c98939e58c28b9e89b03b29a8f19e64928b3dcbe5eb9340\" returns successfully" Sep 9 04:56:50.099912 containerd[1866]: time="2025-09-09T04:56:50.099882107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-blr4m,Uid:ef65cad1-f38f-4d74-aff6-26558cee563c,Namespace:calico-system,Attempt:0,} returns sandbox id \"63399b67b9b662a5022b60b8a8f83b348fd34bef73202f9d9272c97e7ec14b6a\"" Sep 9 04:56:50.100669 containerd[1866]: time="2025-09-09T04:56:50.100642482Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 398.98856ms" Sep 9 04:56:50.100669 containerd[1866]: time="2025-09-09T04:56:50.100669211Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 9 04:56:50.102999 containerd[1866]: time="2025-09-09T04:56:50.102972738Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 9 04:56:50.104866 containerd[1866]: time="2025-09-09T04:56:50.104841652Z" level=info msg="CreateContainer within sandbox \"bf4edd1830cee4582382cdbd65f955c683e062faa88ef6bc5ff69bf537f5c4f2\" for 
container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 9 04:56:50.117123 systemd-networkd[1687]: calic2b485130d4: Gained IPv6LL Sep 9 04:56:50.129085 containerd[1866]: time="2025-09-09T04:56:50.129010403Z" level=info msg="Container e3e32c9c65116ed035e7a8ac52ce067d42c9f3a058c01f6153ed4ea3ce3cf5ca: CDI devices from CRI Config.CDIDevices: []" Sep 9 04:56:50.152477 containerd[1866]: time="2025-09-09T04:56:50.152439988Z" level=info msg="CreateContainer within sandbox \"bf4edd1830cee4582382cdbd65f955c683e062faa88ef6bc5ff69bf537f5c4f2\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"e3e32c9c65116ed035e7a8ac52ce067d42c9f3a058c01f6153ed4ea3ce3cf5ca\"" Sep 9 04:56:50.153194 containerd[1866]: time="2025-09-09T04:56:50.153171386Z" level=info msg="StartContainer for \"e3e32c9c65116ed035e7a8ac52ce067d42c9f3a058c01f6153ed4ea3ce3cf5ca\"" Sep 9 04:56:50.154913 containerd[1866]: time="2025-09-09T04:56:50.154800004Z" level=info msg="connecting to shim e3e32c9c65116ed035e7a8ac52ce067d42c9f3a058c01f6153ed4ea3ce3cf5ca" address="unix:///run/containerd/s/4c3c32cfad580457e885f213943f4a21c5464a80e58987af00188941ff513d1a" protocol=ttrpc version=3 Sep 9 04:56:50.175537 systemd[1]: Started cri-containerd-e3e32c9c65116ed035e7a8ac52ce067d42c9f3a058c01f6153ed4ea3ce3cf5ca.scope - libcontainer container e3e32c9c65116ed035e7a8ac52ce067d42c9f3a058c01f6153ed4ea3ce3cf5ca. 
Sep 9 04:56:50.227100 containerd[1866]: time="2025-09-09T04:56:50.227067651Z" level=info msg="StartContainer for \"e3e32c9c65116ed035e7a8ac52ce067d42c9f3a058c01f6153ed4ea3ce3cf5ca\" returns successfully" Sep 9 04:56:50.640104 containerd[1866]: time="2025-09-09T04:56:50.640057145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54d5b6bfc9-zsk7h,Uid:57dfc688-a4c4-4ed5-b76a-fb63bc00545e,Namespace:calico-system,Attempt:0,}" Sep 9 04:56:50.812912 kubelet[3383]: I0909 04:56:50.812447 3383 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-h2544" podStartSLOduration=38.812429839 podStartE2EDuration="38.812429839s" podCreationTimestamp="2025-09-09 04:56:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 04:56:50.812020418 +0000 UTC m=+44.318971367" watchObservedRunningTime="2025-09-09 04:56:50.812429839 +0000 UTC m=+44.319380780" Sep 9 04:56:50.854298 kubelet[3383]: I0909 04:56:50.845108 3383 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-646f77f6b4-tb4sw" podStartSLOduration=26.258101219 podStartE2EDuration="28.845092115s" podCreationTimestamp="2025-09-09 04:56:22 +0000 UTC" firstStartedPulling="2025-09-09 04:56:47.113830369 +0000 UTC m=+40.620781310" lastFinishedPulling="2025-09-09 04:56:49.700821265 +0000 UTC m=+43.207772206" observedRunningTime="2025-09-09 04:56:50.829547941 +0000 UTC m=+44.336498938" watchObservedRunningTime="2025-09-09 04:56:50.845092115 +0000 UTC m=+44.352043056" Sep 9 04:56:51.155751 kubelet[3383]: I0909 04:56:50.905265 3383 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-646f77f6b4-ml9lp" podStartSLOduration=26.008442061 podStartE2EDuration="28.905247374s" podCreationTimestamp="2025-09-09 04:56:22 +0000 UTC" firstStartedPulling="2025-09-09 
04:56:47.20502515 +0000 UTC m=+40.711976091" lastFinishedPulling="2025-09-09 04:56:50.101830455 +0000 UTC m=+43.608781404" observedRunningTime="2025-09-09 04:56:50.865505319 +0000 UTC m=+44.372456260" watchObservedRunningTime="2025-09-09 04:56:50.905247374 +0000 UTC m=+44.412198315" Sep 9 04:56:50.949184 systemd-networkd[1687]: caliebec926e106: Gained IPv6LL Sep 9 04:56:51.013105 systemd-networkd[1687]: cali0f0f9ab02ac: Gained IPv6LL Sep 9 04:56:51.242338 systemd-networkd[1687]: cali189f3264b8e: Link UP Sep 9 04:56:51.243409 systemd-networkd[1687]: cali189f3264b8e: Gained carrier Sep 9 04:56:51.264730 containerd[1866]: 2025-09-09 04:56:51.002 [INFO][5503] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4452.0.0--n--087888047c-k8s-calico--kube--controllers--54d5b6bfc9--zsk7h-eth0 calico-kube-controllers-54d5b6bfc9- calico-system 57dfc688-a4c4-4ed5-b76a-fb63bc00545e 787 0 2025-09-09 04:56:25 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:54d5b6bfc9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4452.0.0-n-087888047c calico-kube-controllers-54d5b6bfc9-zsk7h eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali189f3264b8e [] [] }} ContainerID="3886d6ebb476b3c41736464ba279ef23e42cd1d2cd116da6241f38432987f267" Namespace="calico-system" Pod="calico-kube-controllers-54d5b6bfc9-zsk7h" WorkloadEndpoint="ci--4452.0.0--n--087888047c-k8s-calico--kube--controllers--54d5b6bfc9--zsk7h-" Sep 9 04:56:51.264730 containerd[1866]: 2025-09-09 04:56:51.155 [INFO][5503] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3886d6ebb476b3c41736464ba279ef23e42cd1d2cd116da6241f38432987f267" Namespace="calico-system" Pod="calico-kube-controllers-54d5b6bfc9-zsk7h" 
WorkloadEndpoint="ci--4452.0.0--n--087888047c-k8s-calico--kube--controllers--54d5b6bfc9--zsk7h-eth0" Sep 9 04:56:51.264730 containerd[1866]: 2025-09-09 04:56:51.182 [INFO][5515] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3886d6ebb476b3c41736464ba279ef23e42cd1d2cd116da6241f38432987f267" HandleID="k8s-pod-network.3886d6ebb476b3c41736464ba279ef23e42cd1d2cd116da6241f38432987f267" Workload="ci--4452.0.0--n--087888047c-k8s-calico--kube--controllers--54d5b6bfc9--zsk7h-eth0" Sep 9 04:56:51.264730 containerd[1866]: 2025-09-09 04:56:51.183 [INFO][5515] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3886d6ebb476b3c41736464ba279ef23e42cd1d2cd116da6241f38432987f267" HandleID="k8s-pod-network.3886d6ebb476b3c41736464ba279ef23e42cd1d2cd116da6241f38432987f267" Workload="ci--4452.0.0--n--087888047c-k8s-calico--kube--controllers--54d5b6bfc9--zsk7h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cb730), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4452.0.0-n-087888047c", "pod":"calico-kube-controllers-54d5b6bfc9-zsk7h", "timestamp":"2025-09-09 04:56:51.182833511 +0000 UTC"}, Hostname:"ci-4452.0.0-n-087888047c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 04:56:51.264730 containerd[1866]: 2025-09-09 04:56:51.184 [INFO][5515] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 04:56:51.264730 containerd[1866]: 2025-09-09 04:56:51.184 [INFO][5515] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 04:56:51.264730 containerd[1866]: 2025-09-09 04:56:51.184 [INFO][5515] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4452.0.0-n-087888047c' Sep 9 04:56:51.264730 containerd[1866]: 2025-09-09 04:56:51.189 [INFO][5515] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3886d6ebb476b3c41736464ba279ef23e42cd1d2cd116da6241f38432987f267" host="ci-4452.0.0-n-087888047c" Sep 9 04:56:51.264730 containerd[1866]: 2025-09-09 04:56:51.195 [INFO][5515] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4452.0.0-n-087888047c" Sep 9 04:56:51.264730 containerd[1866]: 2025-09-09 04:56:51.205 [INFO][5515] ipam/ipam.go 511: Trying affinity for 192.168.111.0/26 host="ci-4452.0.0-n-087888047c" Sep 9 04:56:51.264730 containerd[1866]: 2025-09-09 04:56:51.208 [INFO][5515] ipam/ipam.go 158: Attempting to load block cidr=192.168.111.0/26 host="ci-4452.0.0-n-087888047c" Sep 9 04:56:51.264730 containerd[1866]: 2025-09-09 04:56:51.212 [INFO][5515] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.111.0/26 host="ci-4452.0.0-n-087888047c" Sep 9 04:56:51.264730 containerd[1866]: 2025-09-09 04:56:51.212 [INFO][5515] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.111.0/26 handle="k8s-pod-network.3886d6ebb476b3c41736464ba279ef23e42cd1d2cd116da6241f38432987f267" host="ci-4452.0.0-n-087888047c" Sep 9 04:56:51.264730 containerd[1866]: 2025-09-09 04:56:51.214 [INFO][5515] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3886d6ebb476b3c41736464ba279ef23e42cd1d2cd116da6241f38432987f267 Sep 9 04:56:51.264730 containerd[1866]: 2025-09-09 04:56:51.222 [INFO][5515] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.111.0/26 handle="k8s-pod-network.3886d6ebb476b3c41736464ba279ef23e42cd1d2cd116da6241f38432987f267" host="ci-4452.0.0-n-087888047c" Sep 9 04:56:51.264730 containerd[1866]: 2025-09-09 04:56:51.235 [INFO][5515] ipam/ipam.go 1256: Successfully claimed 
IPs: [192.168.111.8/26] block=192.168.111.0/26 handle="k8s-pod-network.3886d6ebb476b3c41736464ba279ef23e42cd1d2cd116da6241f38432987f267" host="ci-4452.0.0-n-087888047c" Sep 9 04:56:51.264730 containerd[1866]: 2025-09-09 04:56:51.235 [INFO][5515] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.111.8/26] handle="k8s-pod-network.3886d6ebb476b3c41736464ba279ef23e42cd1d2cd116da6241f38432987f267" host="ci-4452.0.0-n-087888047c" Sep 9 04:56:51.264730 containerd[1866]: 2025-09-09 04:56:51.235 [INFO][5515] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 04:56:51.264730 containerd[1866]: 2025-09-09 04:56:51.235 [INFO][5515] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.111.8/26] IPv6=[] ContainerID="3886d6ebb476b3c41736464ba279ef23e42cd1d2cd116da6241f38432987f267" HandleID="k8s-pod-network.3886d6ebb476b3c41736464ba279ef23e42cd1d2cd116da6241f38432987f267" Workload="ci--4452.0.0--n--087888047c-k8s-calico--kube--controllers--54d5b6bfc9--zsk7h-eth0" Sep 9 04:56:51.266258 containerd[1866]: 2025-09-09 04:56:51.237 [INFO][5503] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3886d6ebb476b3c41736464ba279ef23e42cd1d2cd116da6241f38432987f267" Namespace="calico-system" Pod="calico-kube-controllers-54d5b6bfc9-zsk7h" WorkloadEndpoint="ci--4452.0.0--n--087888047c-k8s-calico--kube--controllers--54d5b6bfc9--zsk7h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452.0.0--n--087888047c-k8s-calico--kube--controllers--54d5b6bfc9--zsk7h-eth0", GenerateName:"calico-kube-controllers-54d5b6bfc9-", Namespace:"calico-system", SelfLink:"", UID:"57dfc688-a4c4-4ed5-b76a-fb63bc00545e", ResourceVersion:"787", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 4, 56, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", 
"k8s-app":"calico-kube-controllers", "pod-template-hash":"54d5b6bfc9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452.0.0-n-087888047c", ContainerID:"", Pod:"calico-kube-controllers-54d5b6bfc9-zsk7h", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.111.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali189f3264b8e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 04:56:51.266258 containerd[1866]: 2025-09-09 04:56:51.237 [INFO][5503] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.111.8/32] ContainerID="3886d6ebb476b3c41736464ba279ef23e42cd1d2cd116da6241f38432987f267" Namespace="calico-system" Pod="calico-kube-controllers-54d5b6bfc9-zsk7h" WorkloadEndpoint="ci--4452.0.0--n--087888047c-k8s-calico--kube--controllers--54d5b6bfc9--zsk7h-eth0" Sep 9 04:56:51.266258 containerd[1866]: 2025-09-09 04:56:51.237 [INFO][5503] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali189f3264b8e ContainerID="3886d6ebb476b3c41736464ba279ef23e42cd1d2cd116da6241f38432987f267" Namespace="calico-system" Pod="calico-kube-controllers-54d5b6bfc9-zsk7h" WorkloadEndpoint="ci--4452.0.0--n--087888047c-k8s-calico--kube--controllers--54d5b6bfc9--zsk7h-eth0" Sep 9 04:56:51.266258 containerd[1866]: 2025-09-09 04:56:51.244 [INFO][5503] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3886d6ebb476b3c41736464ba279ef23e42cd1d2cd116da6241f38432987f267" Namespace="calico-system" 
Pod="calico-kube-controllers-54d5b6bfc9-zsk7h" WorkloadEndpoint="ci--4452.0.0--n--087888047c-k8s-calico--kube--controllers--54d5b6bfc9--zsk7h-eth0" Sep 9 04:56:51.266258 containerd[1866]: 2025-09-09 04:56:51.245 [INFO][5503] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3886d6ebb476b3c41736464ba279ef23e42cd1d2cd116da6241f38432987f267" Namespace="calico-system" Pod="calico-kube-controllers-54d5b6bfc9-zsk7h" WorkloadEndpoint="ci--4452.0.0--n--087888047c-k8s-calico--kube--controllers--54d5b6bfc9--zsk7h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452.0.0--n--087888047c-k8s-calico--kube--controllers--54d5b6bfc9--zsk7h-eth0", GenerateName:"calico-kube-controllers-54d5b6bfc9-", Namespace:"calico-system", SelfLink:"", UID:"57dfc688-a4c4-4ed5-b76a-fb63bc00545e", ResourceVersion:"787", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 4, 56, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54d5b6bfc9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452.0.0-n-087888047c", ContainerID:"3886d6ebb476b3c41736464ba279ef23e42cd1d2cd116da6241f38432987f267", Pod:"calico-kube-controllers-54d5b6bfc9-zsk7h", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.111.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali189f3264b8e", MAC:"7a:04:49:04:7f:55", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 04:56:51.266258 containerd[1866]: 2025-09-09 04:56:51.260 [INFO][5503] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3886d6ebb476b3c41736464ba279ef23e42cd1d2cd116da6241f38432987f267" Namespace="calico-system" Pod="calico-kube-controllers-54d5b6bfc9-zsk7h" WorkloadEndpoint="ci--4452.0.0--n--087888047c-k8s-calico--kube--controllers--54d5b6bfc9--zsk7h-eth0" Sep 9 04:56:51.320713 containerd[1866]: time="2025-09-09T04:56:51.320613693Z" level=info msg="connecting to shim 3886d6ebb476b3c41736464ba279ef23e42cd1d2cd116da6241f38432987f267" address="unix:///run/containerd/s/21a9b8bef8caa92d464a996bc0619dc2865fd8eba899adcc9ae0b1b2fbec8722" namespace=k8s.io protocol=ttrpc version=3 Sep 9 04:56:51.345166 systemd[1]: Started cri-containerd-3886d6ebb476b3c41736464ba279ef23e42cd1d2cd116da6241f38432987f267.scope - libcontainer container 3886d6ebb476b3c41736464ba279ef23e42cd1d2cd116da6241f38432987f267. Sep 9 04:56:51.621080 containerd[1866]: time="2025-09-09T04:56:51.621042077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54d5b6bfc9-zsk7h,Uid:57dfc688-a4c4-4ed5-b76a-fb63bc00545e,Namespace:calico-system,Attempt:0,} returns sandbox id \"3886d6ebb476b3c41736464ba279ef23e42cd1d2cd116da6241f38432987f267\"" Sep 9 04:56:51.802955 kubelet[3383]: I0909 04:56:51.802926 3383 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 04:56:51.803338 kubelet[3383]: I0909 04:56:51.803318 3383 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 04:56:53.153668 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3430719088.mount: Deactivated successfully. 
Sep 9 04:56:53.189138 systemd-networkd[1687]: cali189f3264b8e: Gained IPv6LL Sep 9 04:56:53.572835 containerd[1866]: time="2025-09-09T04:56:53.572717603Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:56:53.577978 containerd[1866]: time="2025-09-09T04:56:53.577928825Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=61845332" Sep 9 04:56:53.582102 containerd[1866]: time="2025-09-09T04:56:53.582076820Z" level=info msg="ImageCreate event name:\"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:56:53.587032 containerd[1866]: time="2025-09-09T04:56:53.586832757Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:56:53.587443 containerd[1866]: time="2025-09-09T04:56:53.587416179Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"61845178\" in 3.48409559s" Sep 9 04:56:53.587535 containerd[1866]: time="2025-09-09T04:56:53.587520852Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\"" Sep 9 04:56:53.588628 containerd[1866]: time="2025-09-09T04:56:53.588479118Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 9 04:56:53.589886 containerd[1866]: time="2025-09-09T04:56:53.589706155Z" level=info msg="CreateContainer within sandbox 
\"0641b3c0086d9b7b2d57d2f93cd5a6cd989c162cd48e73015e5a2f0757e97a9e\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 9 04:56:53.620142 containerd[1866]: time="2025-09-09T04:56:53.620119493Z" level=info msg="Container 7ab369aad34ea978c5361fe21e2660e1fbd965e5cbb965e67fc88c243aca2430: CDI devices from CRI Config.CDIDevices: []" Sep 9 04:56:53.641241 containerd[1866]: time="2025-09-09T04:56:53.641209247Z" level=info msg="CreateContainer within sandbox \"0641b3c0086d9b7b2d57d2f93cd5a6cd989c162cd48e73015e5a2f0757e97a9e\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"7ab369aad34ea978c5361fe21e2660e1fbd965e5cbb965e67fc88c243aca2430\"" Sep 9 04:56:53.641841 containerd[1866]: time="2025-09-09T04:56:53.641668324Z" level=info msg="StartContainer for \"7ab369aad34ea978c5361fe21e2660e1fbd965e5cbb965e67fc88c243aca2430\"" Sep 9 04:56:53.642858 containerd[1866]: time="2025-09-09T04:56:53.642796431Z" level=info msg="connecting to shim 7ab369aad34ea978c5361fe21e2660e1fbd965e5cbb965e67fc88c243aca2430" address="unix:///run/containerd/s/7b31bbd459b93149f689125be40cab393665007197d307f1f1974299d6c4a21c" protocol=ttrpc version=3 Sep 9 04:56:53.674109 systemd[1]: Started cri-containerd-7ab369aad34ea978c5361fe21e2660e1fbd965e5cbb965e67fc88c243aca2430.scope - libcontainer container 7ab369aad34ea978c5361fe21e2660e1fbd965e5cbb965e67fc88c243aca2430. 
Sep 9 04:56:53.715249 containerd[1866]: time="2025-09-09T04:56:53.715179524Z" level=info msg="StartContainer for \"7ab369aad34ea978c5361fe21e2660e1fbd965e5cbb965e67fc88c243aca2430\" returns successfully" Sep 9 04:56:53.898351 containerd[1866]: time="2025-09-09T04:56:53.898309016Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7ab369aad34ea978c5361fe21e2660e1fbd965e5cbb965e67fc88c243aca2430\" id:\"199fb497fd4185736d6d0d2382c3d0752c246a673dc4d08cd4f2ca6011e51731\" pid:5637 exit_status:1 exited_at:{seconds:1757393813 nanos:897750747}" Sep 9 04:56:54.878829 containerd[1866]: time="2025-09-09T04:56:54.878788223Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7ab369aad34ea978c5361fe21e2660e1fbd965e5cbb965e67fc88c243aca2430\" id:\"b656cfe6b58000b4e1125726e87803d854c860150bac7f06d9f2f39e2c641bc3\" pid:5661 exit_status:1 exited_at:{seconds:1757393814 nanos:878245977}" Sep 9 04:56:55.004023 containerd[1866]: time="2025-09-09T04:56:55.003738418Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:56:55.010665 containerd[1866]: time="2025-09-09T04:56:55.010632537Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8227489" Sep 9 04:56:55.014073 containerd[1866]: time="2025-09-09T04:56:55.014026452Z" level=info msg="ImageCreate event name:\"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:56:55.018798 containerd[1866]: time="2025-09-09T04:56:55.018757405Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:56:55.019319 containerd[1866]: time="2025-09-09T04:56:55.019032712Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" 
with image id \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"9596730\" in 1.430309327s" Sep 9 04:56:55.019319 containerd[1866]: time="2025-09-09T04:56:55.019059280Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\"" Sep 9 04:56:55.020094 containerd[1866]: time="2025-09-09T04:56:55.020071739Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 9 04:56:55.021477 containerd[1866]: time="2025-09-09T04:56:55.021451345Z" level=info msg="CreateContainer within sandbox \"63399b67b9b662a5022b60b8a8f83b348fd34bef73202f9d9272c97e7ec14b6a\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 9 04:56:55.047023 containerd[1866]: time="2025-09-09T04:56:55.044893726Z" level=info msg="Container 47fd15ac8d205296db7f326cb8fa4314ead4d7bcf827cf854bb0013b64513398: CDI devices from CRI Config.CDIDevices: []" Sep 9 04:56:55.067866 containerd[1866]: time="2025-09-09T04:56:55.067828343Z" level=info msg="CreateContainer within sandbox \"63399b67b9b662a5022b60b8a8f83b348fd34bef73202f9d9272c97e7ec14b6a\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"47fd15ac8d205296db7f326cb8fa4314ead4d7bcf827cf854bb0013b64513398\"" Sep 9 04:56:55.068336 containerd[1866]: time="2025-09-09T04:56:55.068314727Z" level=info msg="StartContainer for \"47fd15ac8d205296db7f326cb8fa4314ead4d7bcf827cf854bb0013b64513398\"" Sep 9 04:56:55.069769 containerd[1866]: time="2025-09-09T04:56:55.069742580Z" level=info msg="connecting to shim 47fd15ac8d205296db7f326cb8fa4314ead4d7bcf827cf854bb0013b64513398" address="unix:///run/containerd/s/f52c5ac517ebac941e4d8d83bf63271f6afaaf1d72c69d7e807058f9719b88ee" protocol=ttrpc version=3 Sep 9 
04:56:55.087117 systemd[1]: Started cri-containerd-47fd15ac8d205296db7f326cb8fa4314ead4d7bcf827cf854bb0013b64513398.scope - libcontainer container 47fd15ac8d205296db7f326cb8fa4314ead4d7bcf827cf854bb0013b64513398. Sep 9 04:56:55.115642 containerd[1866]: time="2025-09-09T04:56:55.115605319Z" level=info msg="StartContainer for \"47fd15ac8d205296db7f326cb8fa4314ead4d7bcf827cf854bb0013b64513398\" returns successfully" Sep 9 04:56:58.244031 containerd[1866]: time="2025-09-09T04:56:58.243800697Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:56:58.247134 containerd[1866]: time="2025-09-09T04:56:58.246996709Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=48134957" Sep 9 04:56:58.250662 containerd[1866]: time="2025-09-09T04:56:58.250634808Z" level=info msg="ImageCreate event name:\"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:56:58.255305 containerd[1866]: time="2025-09-09T04:56:58.255262169Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:56:58.255829 containerd[1866]: time="2025-09-09T04:56:58.255527170Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"49504166\" in 3.235431047s" Sep 9 04:56:58.255829 containerd[1866]: time="2025-09-09T04:56:58.255555547Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\"" Sep 9 04:56:58.256353 containerd[1866]: time="2025-09-09T04:56:58.256335651Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 9 04:56:58.268019 containerd[1866]: time="2025-09-09T04:56:58.267106654Z" level=info msg="CreateContainer within sandbox \"3886d6ebb476b3c41736464ba279ef23e42cd1d2cd116da6241f38432987f267\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 9 04:56:58.302060 containerd[1866]: time="2025-09-09T04:56:58.302028385Z" level=info msg="Container b46dfaa2dd53bff271aa9265db6ab46abb3bf3db4b576273b63aa3682158fd6d: CDI devices from CRI Config.CDIDevices: []" Sep 9 04:56:58.304365 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2183249143.mount: Deactivated successfully. Sep 9 04:56:58.321607 containerd[1866]: time="2025-09-09T04:56:58.321569600Z" level=info msg="CreateContainer within sandbox \"3886d6ebb476b3c41736464ba279ef23e42cd1d2cd116da6241f38432987f267\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"b46dfaa2dd53bff271aa9265db6ab46abb3bf3db4b576273b63aa3682158fd6d\"" Sep 9 04:56:58.322213 containerd[1866]: time="2025-09-09T04:56:58.322193164Z" level=info msg="StartContainer for \"b46dfaa2dd53bff271aa9265db6ab46abb3bf3db4b576273b63aa3682158fd6d\"" Sep 9 04:56:58.323274 containerd[1866]: time="2025-09-09T04:56:58.323235868Z" level=info msg="connecting to shim b46dfaa2dd53bff271aa9265db6ab46abb3bf3db4b576273b63aa3682158fd6d" address="unix:///run/containerd/s/21a9b8bef8caa92d464a996bc0619dc2865fd8eba899adcc9ae0b1b2fbec8722" protocol=ttrpc version=3 Sep 9 04:56:58.345112 systemd[1]: Started cri-containerd-b46dfaa2dd53bff271aa9265db6ab46abb3bf3db4b576273b63aa3682158fd6d.scope - libcontainer container b46dfaa2dd53bff271aa9265db6ab46abb3bf3db4b576273b63aa3682158fd6d. 
Sep 9 04:56:58.383835 containerd[1866]: time="2025-09-09T04:56:58.383737652Z" level=info msg="StartContainer for \"b46dfaa2dd53bff271aa9265db6ab46abb3bf3db4b576273b63aa3682158fd6d\" returns successfully" Sep 9 04:56:58.843165 kubelet[3383]: I0909 04:56:58.842712 3383 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-54d5b6bfc9-zsk7h" podStartSLOduration=27.208621814 podStartE2EDuration="33.842695374s" podCreationTimestamp="2025-09-09 04:56:25 +0000 UTC" firstStartedPulling="2025-09-09 04:56:51.622189217 +0000 UTC m=+45.129140158" lastFinishedPulling="2025-09-09 04:56:58.256262777 +0000 UTC m=+51.763213718" observedRunningTime="2025-09-09 04:56:58.840997769 +0000 UTC m=+52.347948710" watchObservedRunningTime="2025-09-09 04:56:58.842695374 +0000 UTC m=+52.349646323" Sep 9 04:56:58.844300 kubelet[3383]: I0909 04:56:58.843875 3383 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-kfzzp" podStartSLOduration=29.195878437 podStartE2EDuration="33.843865435s" podCreationTimestamp="2025-09-09 04:56:25 +0000 UTC" firstStartedPulling="2025-09-09 04:56:48.940159349 +0000 UTC m=+42.447110290" lastFinishedPulling="2025-09-09 04:56:53.588146339 +0000 UTC m=+47.095097288" observedRunningTime="2025-09-09 04:56:53.83418869 +0000 UTC m=+47.341139631" watchObservedRunningTime="2025-09-09 04:56:58.843865435 +0000 UTC m=+52.350816384" Sep 9 04:56:58.929737 containerd[1866]: time="2025-09-09T04:56:58.929681592Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b46dfaa2dd53bff271aa9265db6ab46abb3bf3db4b576273b63aa3682158fd6d\" id:\"d9c224dbf990dcd32c631f272c36a97e529782c47e9e4fefc5956c66b5a43e27\" pid:5775 exit_status:1 exited_at:{seconds:1757393818 nanos:923254149}" Sep 9 04:56:59.859595 containerd[1866]: time="2025-09-09T04:56:59.859196376Z" level=info msg="TaskExit event in podsandbox handler 
container_id:\"b46dfaa2dd53bff271aa9265db6ab46abb3bf3db4b576273b63aa3682158fd6d\" id:\"07d8073c5b525d9409bcb3829ded98717a441dadf411c35b9d4e0f4a89ee584f\" pid:5796 exited_at:{seconds:1757393819 nanos:858181984}" Sep 9 04:57:00.068037 containerd[1866]: time="2025-09-09T04:57:00.067835325Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:57:00.071307 containerd[1866]: time="2025-09-09T04:57:00.071177791Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=13761208" Sep 9 04:57:00.075174 containerd[1866]: time="2025-09-09T04:57:00.075125203Z" level=info msg="ImageCreate event name:\"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:57:00.080304 containerd[1866]: time="2025-09-09T04:57:00.080250628Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 04:57:00.080860 containerd[1866]: time="2025-09-09T04:57:00.080590951Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"15130401\" in 1.824139312s" Sep 9 04:57:00.080860 containerd[1866]: time="2025-09-09T04:57:00.080619784Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\"" Sep 9 04:57:00.083500 containerd[1866]: 
time="2025-09-09T04:57:00.083474449Z" level=info msg="CreateContainer within sandbox \"63399b67b9b662a5022b60b8a8f83b348fd34bef73202f9d9272c97e7ec14b6a\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 9 04:57:00.113143 containerd[1866]: time="2025-09-09T04:57:00.110947322Z" level=info msg="Container 28fd58fbc6280a1ee667c4e9279d5ab053960b34179bb328357c2078c212892d: CDI devices from CRI Config.CDIDevices: []" Sep 9 04:57:00.133380 containerd[1866]: time="2025-09-09T04:57:00.133344515Z" level=info msg="CreateContainer within sandbox \"63399b67b9b662a5022b60b8a8f83b348fd34bef73202f9d9272c97e7ec14b6a\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"28fd58fbc6280a1ee667c4e9279d5ab053960b34179bb328357c2078c212892d\"" Sep 9 04:57:00.136036 containerd[1866]: time="2025-09-09T04:57:00.134735327Z" level=info msg="StartContainer for \"28fd58fbc6280a1ee667c4e9279d5ab053960b34179bb328357c2078c212892d\"" Sep 9 04:57:00.136489 containerd[1866]: time="2025-09-09T04:57:00.136465397Z" level=info msg="connecting to shim 28fd58fbc6280a1ee667c4e9279d5ab053960b34179bb328357c2078c212892d" address="unix:///run/containerd/s/f52c5ac517ebac941e4d8d83bf63271f6afaaf1d72c69d7e807058f9719b88ee" protocol=ttrpc version=3 Sep 9 04:57:00.155112 systemd[1]: Started cri-containerd-28fd58fbc6280a1ee667c4e9279d5ab053960b34179bb328357c2078c212892d.scope - libcontainer container 28fd58fbc6280a1ee667c4e9279d5ab053960b34179bb328357c2078c212892d. 
Sep 9 04:57:00.186581 containerd[1866]: time="2025-09-09T04:57:00.186545573Z" level=info msg="StartContainer for \"28fd58fbc6280a1ee667c4e9279d5ab053960b34179bb328357c2078c212892d\" returns successfully" Sep 9 04:57:00.719230 kubelet[3383]: I0909 04:57:00.719194 3383 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 9 04:57:00.721007 kubelet[3383]: I0909 04:57:00.720981 3383 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 9 04:57:00.846708 kubelet[3383]: I0909 04:57:00.846651 3383 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-blr4m" podStartSLOduration=25.868708154 podStartE2EDuration="35.84663888s" podCreationTimestamp="2025-09-09 04:56:25 +0000 UTC" firstStartedPulling="2025-09-09 04:56:50.10328406 +0000 UTC m=+43.610235009" lastFinishedPulling="2025-09-09 04:57:00.081214786 +0000 UTC m=+53.588165735" observedRunningTime="2025-09-09 04:57:00.845571142 +0000 UTC m=+54.352522099" watchObservedRunningTime="2025-09-09 04:57:00.84663888 +0000 UTC m=+54.353589829" Sep 9 04:57:11.810498 containerd[1866]: time="2025-09-09T04:57:11.810315918Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2fef469f821f5a8a3c89f16be6a251680bf2e8dee3b39f9b54d9592cad224321\" id:\"85f77319dba1f771cf328061bca4f022ce2d62b64e1aa339e04bb204e1d0ad14\" pid:5867 exited_at:{seconds:1757393831 nanos:810046053}" Sep 9 04:57:16.705582 kubelet[3383]: I0909 04:57:16.705243 3383 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 04:57:19.155901 kubelet[3383]: I0909 04:57:19.155032 3383 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 04:57:24.887847 containerd[1866]: time="2025-09-09T04:57:24.887043130Z" level=info msg="TaskExit event in podsandbox 
handler container_id:\"7ab369aad34ea978c5361fe21e2660e1fbd965e5cbb965e67fc88c243aca2430\" id:\"bbd3e26bdac77103c7c96350cb97d8428f45df22926846419e58930f18a15917\" pid:5901 exited_at:{seconds:1757393844 nanos:886094797}" Sep 9 04:57:29.861719 containerd[1866]: time="2025-09-09T04:57:29.861674289Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b46dfaa2dd53bff271aa9265db6ab46abb3bf3db4b576273b63aa3682158fd6d\" id:\"74c8e8da39e84a84049b85f7bd7d1218c43373bd557c3a8f6736272352411a92\" pid:5931 exited_at:{seconds:1757393849 nanos:860841191}" Sep 9 04:57:41.648033 containerd[1866]: time="2025-09-09T04:57:41.647940299Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7ab369aad34ea978c5361fe21e2660e1fbd965e5cbb965e67fc88c243aca2430\" id:\"917025a53a0d5c7021d2e78619c0abd9c2aaa04749cabf836df6f484fd3fd0bd\" pid:5955 exited_at:{seconds:1757393861 nanos:645970815}" Sep 9 04:57:41.877306 containerd[1866]: time="2025-09-09T04:57:41.877259232Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2fef469f821f5a8a3c89f16be6a251680bf2e8dee3b39f9b54d9592cad224321\" id:\"1cc9073b9e3340c51750928c4e7167d42c0c81132d7a91cb7414fa5cdfcf59b9\" pid:5978 exited_at:{seconds:1757393861 nanos:876817603}" Sep 9 04:57:45.569734 containerd[1866]: time="2025-09-09T04:57:45.569694155Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b46dfaa2dd53bff271aa9265db6ab46abb3bf3db4b576273b63aa3682158fd6d\" id:\"34540036212f68bcad2444b00ffa5ea28d114d55a5d0319d633bc7684fadcbd3\" pid:6007 exited_at:{seconds:1757393865 nanos:569519014}" Sep 9 04:57:54.871955 containerd[1866]: time="2025-09-09T04:57:54.871764133Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7ab369aad34ea978c5361fe21e2660e1fbd965e5cbb965e67fc88c243aca2430\" id:\"75651c5af6ba762da92d1b6f7918dedc0b5e0ae409eec00881432a816c7850a8\" pid:6030 exited_at:{seconds:1757393874 nanos:871461332}" Sep 9 04:57:59.857494 containerd[1866]: 
time="2025-09-09T04:57:59.857448464Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b46dfaa2dd53bff271aa9265db6ab46abb3bf3db4b576273b63aa3682158fd6d\" id:\"f6e6d4d04107287718ebabeae656c355b72365ea367195a604b2ab86bfb2184d\" pid:6054 exited_at:{seconds:1757393879 nanos:856909759}" Sep 9 04:58:11.809266 containerd[1866]: time="2025-09-09T04:58:11.809222319Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2fef469f821f5a8a3c89f16be6a251680bf2e8dee3b39f9b54d9592cad224321\" id:\"b06016e8267ded4ed050de44b0b8ba881acf17501579137787fb233990bd3395\" pid:6086 exited_at:{seconds:1757393891 nanos:808891869}" Sep 9 04:58:22.000224 systemd[1]: Started sshd@7-10.200.20.39:22-10.200.16.10:53980.service - OpenSSH per-connection server daemon (10.200.16.10:53980). Sep 9 04:58:22.416627 sshd[6112]: Accepted publickey for core from 10.200.16.10 port 53980 ssh2: RSA SHA256:3adVRBnOdwvD6jSty2nKG447nUgR57TdklVE855KMY8 Sep 9 04:58:22.418452 sshd-session[6112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 04:58:22.423161 systemd-logind[1850]: New session 10 of user core. Sep 9 04:58:22.429129 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 9 04:58:22.786380 sshd[6115]: Connection closed by 10.200.16.10 port 53980 Sep 9 04:58:22.787151 sshd-session[6112]: pam_unix(sshd:session): session closed for user core Sep 9 04:58:22.790632 systemd[1]: sshd@7-10.200.20.39:22-10.200.16.10:53980.service: Deactivated successfully. Sep 9 04:58:22.792395 systemd[1]: session-10.scope: Deactivated successfully. Sep 9 04:58:22.793153 systemd-logind[1850]: Session 10 logged out. Waiting for processes to exit. Sep 9 04:58:22.794781 systemd-logind[1850]: Removed session 10. 
Sep 9 04:58:24.868938 containerd[1866]: time="2025-09-09T04:58:24.868892158Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7ab369aad34ea978c5361fe21e2660e1fbd965e5cbb965e67fc88c243aca2430\" id:\"a35e16e7ed2efceccf75a838e71fdb529fc2ddaba538c5b82eb257d9b69340e2\" pid:6155 exited_at:{seconds:1757393904 nanos:868568548}" Sep 9 04:58:27.868964 systemd[1]: Started sshd@8-10.200.20.39:22-10.200.16.10:53992.service - OpenSSH per-connection server daemon (10.200.16.10:53992). Sep 9 04:58:28.299140 sshd[6166]: Accepted publickey for core from 10.200.16.10 port 53992 ssh2: RSA SHA256:3adVRBnOdwvD6jSty2nKG447nUgR57TdklVE855KMY8 Sep 9 04:58:28.299830 sshd-session[6166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 04:58:28.307032 systemd-logind[1850]: New session 11 of user core. Sep 9 04:58:28.313149 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 9 04:58:28.714869 sshd[6169]: Connection closed by 10.200.16.10 port 53992 Sep 9 04:58:28.715672 sshd-session[6166]: pam_unix(sshd:session): session closed for user core Sep 9 04:58:28.718893 systemd-logind[1850]: Session 11 logged out. Waiting for processes to exit. Sep 9 04:58:28.721118 systemd[1]: sshd@8-10.200.20.39:22-10.200.16.10:53992.service: Deactivated successfully. Sep 9 04:58:28.725972 systemd[1]: session-11.scope: Deactivated successfully. Sep 9 04:58:28.729331 systemd-logind[1850]: Removed session 11. Sep 9 04:58:29.854478 containerd[1866]: time="2025-09-09T04:58:29.854405952Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b46dfaa2dd53bff271aa9265db6ab46abb3bf3db4b576273b63aa3682158fd6d\" id:\"4c60eb9fa1b7dfc2d4671605eed1eec38e82e13015ca8c1a2aa2e0e18fb4a1e1\" pid:6194 exited_at:{seconds:1757393909 nanos:854048125}" Sep 9 04:58:33.790791 systemd[1]: Started sshd@9-10.200.20.39:22-10.200.16.10:44572.service - OpenSSH per-connection server daemon (10.200.16.10:44572). 
Sep 9 04:58:34.212077 sshd[6206]: Accepted publickey for core from 10.200.16.10 port 44572 ssh2: RSA SHA256:3adVRBnOdwvD6jSty2nKG447nUgR57TdklVE855KMY8 Sep 9 04:58:34.213210 sshd-session[6206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 04:58:34.216973 systemd-logind[1850]: New session 12 of user core. Sep 9 04:58:34.224117 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 9 04:58:34.580105 sshd[6209]: Connection closed by 10.200.16.10 port 44572 Sep 9 04:58:34.579898 sshd-session[6206]: pam_unix(sshd:session): session closed for user core Sep 9 04:58:34.583946 systemd[1]: sshd@9-10.200.20.39:22-10.200.16.10:44572.service: Deactivated successfully. Sep 9 04:58:34.585918 systemd[1]: session-12.scope: Deactivated successfully. Sep 9 04:58:34.586724 systemd-logind[1850]: Session 12 logged out. Waiting for processes to exit. Sep 9 04:58:34.587802 systemd-logind[1850]: Removed session 12. Sep 9 04:58:34.658983 systemd[1]: Started sshd@10-10.200.20.39:22-10.200.16.10:44580.service - OpenSSH per-connection server daemon (10.200.16.10:44580). Sep 9 04:58:35.090670 sshd[6221]: Accepted publickey for core from 10.200.16.10 port 44580 ssh2: RSA SHA256:3adVRBnOdwvD6jSty2nKG447nUgR57TdklVE855KMY8 Sep 9 04:58:35.091811 sshd-session[6221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 04:58:35.097096 systemd-logind[1850]: New session 13 of user core. Sep 9 04:58:35.102121 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 9 04:58:35.476223 sshd[6224]: Connection closed by 10.200.16.10 port 44580 Sep 9 04:58:35.476893 sshd-session[6221]: pam_unix(sshd:session): session closed for user core Sep 9 04:58:35.482181 systemd[1]: sshd@10-10.200.20.39:22-10.200.16.10:44580.service: Deactivated successfully. Sep 9 04:58:35.484607 systemd[1]: session-13.scope: Deactivated successfully. Sep 9 04:58:35.485669 systemd-logind[1850]: Session 13 logged out. 
Waiting for processes to exit. Sep 9 04:58:35.487912 systemd-logind[1850]: Removed session 13. Sep 9 04:58:35.551202 systemd[1]: Started sshd@11-10.200.20.39:22-10.200.16.10:44588.service - OpenSSH per-connection server daemon (10.200.16.10:44588). Sep 9 04:58:35.964489 sshd[6234]: Accepted publickey for core from 10.200.16.10 port 44588 ssh2: RSA SHA256:3adVRBnOdwvD6jSty2nKG447nUgR57TdklVE855KMY8 Sep 9 04:58:35.965697 sshd-session[6234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 04:58:35.969451 systemd-logind[1850]: New session 14 of user core. Sep 9 04:58:35.975108 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 9 04:58:36.325224 sshd[6237]: Connection closed by 10.200.16.10 port 44588 Sep 9 04:58:36.325891 sshd-session[6234]: pam_unix(sshd:session): session closed for user core Sep 9 04:58:36.329268 systemd[1]: sshd@11-10.200.20.39:22-10.200.16.10:44588.service: Deactivated successfully. Sep 9 04:58:36.330824 systemd[1]: session-14.scope: Deactivated successfully. Sep 9 04:58:36.331497 systemd-logind[1850]: Session 14 logged out. Waiting for processes to exit. Sep 9 04:58:36.332775 systemd-logind[1850]: Removed session 14. Sep 9 04:58:41.409093 systemd[1]: Started sshd@12-10.200.20.39:22-10.200.16.10:46030.service - OpenSSH per-connection server daemon (10.200.16.10:46030). 
Sep 9 04:58:41.615826 containerd[1866]: time="2025-09-09T04:58:41.615782632Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7ab369aad34ea978c5361fe21e2660e1fbd965e5cbb965e67fc88c243aca2430\" id:\"088b2d64f24416117c559863a8d937714355e1c46628a0152babec5a42859dfb\" pid:6269 exited_at:{seconds:1757393921 nanos:615448534}" Sep 9 04:58:41.808094 containerd[1866]: time="2025-09-09T04:58:41.807923741Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2fef469f821f5a8a3c89f16be6a251680bf2e8dee3b39f9b54d9592cad224321\" id:\"4e89d63ca914fb879d1551d6bd9c8cb2646d212d35b204b3f3b74a03b64dbfea\" pid:6290 exit_status:1 exited_at:{seconds:1757393921 nanos:807422886}" Sep 9 04:58:41.836555 sshd[6253]: Accepted publickey for core from 10.200.16.10 port 46030 ssh2: RSA SHA256:3adVRBnOdwvD6jSty2nKG447nUgR57TdklVE855KMY8 Sep 9 04:58:41.838266 sshd-session[6253]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 04:58:41.842155 systemd-logind[1850]: New session 15 of user core. Sep 9 04:58:41.848130 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 9 04:58:42.205934 sshd[6301]: Connection closed by 10.200.16.10 port 46030 Sep 9 04:58:42.206336 sshd-session[6253]: pam_unix(sshd:session): session closed for user core Sep 9 04:58:42.209372 systemd[1]: sshd@12-10.200.20.39:22-10.200.16.10:46030.service: Deactivated successfully. Sep 9 04:58:42.210906 systemd[1]: session-15.scope: Deactivated successfully. Sep 9 04:58:42.212630 systemd-logind[1850]: Session 15 logged out. Waiting for processes to exit. Sep 9 04:58:42.215271 systemd-logind[1850]: Removed session 15. 
Sep 9 04:58:45.570496 containerd[1866]: time="2025-09-09T04:58:45.570448976Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b46dfaa2dd53bff271aa9265db6ab46abb3bf3db4b576273b63aa3682158fd6d\" id:\"58f1b744bc5353d538c864d65469f9a7c7002f9288097dbf10077c96b9a1d5d8\" pid:6326 exited_at:{seconds:1757393925 nanos:570055468}" Sep 9 04:58:47.288198 systemd[1]: Started sshd@13-10.200.20.39:22-10.200.16.10:46042.service - OpenSSH per-connection server daemon (10.200.16.10:46042). Sep 9 04:58:47.717916 sshd[6337]: Accepted publickey for core from 10.200.16.10 port 46042 ssh2: RSA SHA256:3adVRBnOdwvD6jSty2nKG447nUgR57TdklVE855KMY8 Sep 9 04:58:47.719126 sshd-session[6337]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 04:58:47.722817 systemd-logind[1850]: New session 16 of user core. Sep 9 04:58:47.731151 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 9 04:58:48.072517 sshd[6340]: Connection closed by 10.200.16.10 port 46042 Sep 9 04:58:48.073231 sshd-session[6337]: pam_unix(sshd:session): session closed for user core Sep 9 04:58:48.082290 systemd[1]: sshd@13-10.200.20.39:22-10.200.16.10:46042.service: Deactivated successfully. Sep 9 04:58:48.083899 systemd[1]: session-16.scope: Deactivated successfully. Sep 9 04:58:48.086798 systemd-logind[1850]: Session 16 logged out. Waiting for processes to exit. Sep 9 04:58:48.087704 systemd-logind[1850]: Removed session 16. Sep 9 04:58:53.166217 systemd[1]: Started sshd@14-10.200.20.39:22-10.200.16.10:56024.service - OpenSSH per-connection server daemon (10.200.16.10:56024). Sep 9 04:58:53.576224 sshd[6352]: Accepted publickey for core from 10.200.16.10 port 56024 ssh2: RSA SHA256:3adVRBnOdwvD6jSty2nKG447nUgR57TdklVE855KMY8 Sep 9 04:58:53.577301 sshd-session[6352]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 04:58:53.580986 systemd-logind[1850]: New session 17 of user core. 
Sep 9 04:58:53.588277 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 9 04:58:53.934876 sshd[6355]: Connection closed by 10.200.16.10 port 56024 Sep 9 04:58:53.935260 sshd-session[6352]: pam_unix(sshd:session): session closed for user core Sep 9 04:58:53.938879 systemd[1]: sshd@14-10.200.20.39:22-10.200.16.10:56024.service: Deactivated successfully. Sep 9 04:58:53.940555 systemd[1]: session-17.scope: Deactivated successfully. Sep 9 04:58:53.941247 systemd-logind[1850]: Session 17 logged out. Waiting for processes to exit. Sep 9 04:58:53.942570 systemd-logind[1850]: Removed session 17. Sep 9 04:58:54.014227 systemd[1]: Started sshd@15-10.200.20.39:22-10.200.16.10:56038.service - OpenSSH per-connection server daemon (10.200.16.10:56038). Sep 9 04:58:54.444152 sshd[6367]: Accepted publickey for core from 10.200.16.10 port 56038 ssh2: RSA SHA256:3adVRBnOdwvD6jSty2nKG447nUgR57TdklVE855KMY8 Sep 9 04:58:54.445335 sshd-session[6367]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 04:58:54.449186 systemd-logind[1850]: New session 18 of user core. Sep 9 04:58:54.460341 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 9 04:58:54.899367 containerd[1866]: time="2025-09-09T04:58:54.899217591Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7ab369aad34ea978c5361fe21e2660e1fbd965e5cbb965e67fc88c243aca2430\" id:\"67aa39ca3b26af3db0f17246e06672896f3581f078e0331ae7a649ec1072db26\" pid:6389 exited_at:{seconds:1757393934 nanos:898546323}" Sep 9 04:58:54.966022 sshd[6370]: Connection closed by 10.200.16.10 port 56038 Sep 9 04:58:54.968201 sshd-session[6367]: pam_unix(sshd:session): session closed for user core Sep 9 04:58:54.972615 systemd[1]: sshd@15-10.200.20.39:22-10.200.16.10:56038.service: Deactivated successfully. Sep 9 04:58:54.974685 systemd[1]: session-18.scope: Deactivated successfully. Sep 9 04:58:54.977548 systemd-logind[1850]: Session 18 logged out. Waiting for processes to exit. 
Sep 9 04:58:54.981303 systemd-logind[1850]: Removed session 18. Sep 9 04:58:55.051067 systemd[1]: Started sshd@16-10.200.20.39:22-10.200.16.10:56046.service - OpenSSH per-connection server daemon (10.200.16.10:56046). Sep 9 04:58:55.471637 sshd[6403]: Accepted publickey for core from 10.200.16.10 port 56046 ssh2: RSA SHA256:3adVRBnOdwvD6jSty2nKG447nUgR57TdklVE855KMY8 Sep 9 04:58:55.473469 sshd-session[6403]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 04:58:55.481498 systemd-logind[1850]: New session 19 of user core. Sep 9 04:58:55.484143 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 9 04:58:56.372108 sshd[6406]: Connection closed by 10.200.16.10 port 56046 Sep 9 04:58:56.372554 sshd-session[6403]: pam_unix(sshd:session): session closed for user core Sep 9 04:58:56.377308 systemd-logind[1850]: Session 19 logged out. Waiting for processes to exit. Sep 9 04:58:56.377860 systemd[1]: sshd@16-10.200.20.39:22-10.200.16.10:56046.service: Deactivated successfully. Sep 9 04:58:56.379802 systemd[1]: session-19.scope: Deactivated successfully. Sep 9 04:58:56.383964 systemd-logind[1850]: Removed session 19. Sep 9 04:58:56.454220 systemd[1]: Started sshd@17-10.200.20.39:22-10.200.16.10:56050.service - OpenSSH per-connection server daemon (10.200.16.10:56050). Sep 9 04:58:56.884817 sshd[6425]: Accepted publickey for core from 10.200.16.10 port 56050 ssh2: RSA SHA256:3adVRBnOdwvD6jSty2nKG447nUgR57TdklVE855KMY8 Sep 9 04:58:56.886454 sshd-session[6425]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 04:58:56.890542 systemd-logind[1850]: New session 20 of user core. Sep 9 04:58:56.895146 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 9 04:58:57.353881 sshd[6428]: Connection closed by 10.200.16.10 port 56050 Sep 9 04:58:57.353714 sshd-session[6425]: pam_unix(sshd:session): session closed for user core Sep 9 04:58:57.359476 systemd-logind[1850]: Session 20 logged out. 
Waiting for processes to exit.
Sep 9 04:58:57.359634 systemd[1]: sshd@17-10.200.20.39:22-10.200.16.10:56050.service: Deactivated successfully.
Sep 9 04:58:57.362684 systemd[1]: session-20.scope: Deactivated successfully.
Sep 9 04:58:57.366118 systemd-logind[1850]: Removed session 20.
Sep 9 04:58:57.428208 systemd[1]: Started sshd@18-10.200.20.39:22-10.200.16.10:56054.service - OpenSSH per-connection server daemon (10.200.16.10:56054).
Sep 9 04:58:57.849744 sshd[6439]: Accepted publickey for core from 10.200.16.10 port 56054 ssh2: RSA SHA256:3adVRBnOdwvD6jSty2nKG447nUgR57TdklVE855KMY8
Sep 9 04:58:57.850906 sshd-session[6439]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 04:58:57.858649 systemd-logind[1850]: New session 21 of user core.
Sep 9 04:58:57.862153 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 9 04:58:58.215459 sshd[6442]: Connection closed by 10.200.16.10 port 56054
Sep 9 04:58:58.216219 sshd-session[6439]: pam_unix(sshd:session): session closed for user core
Sep 9 04:58:58.220434 systemd-logind[1850]: Session 21 logged out. Waiting for processes to exit.
Sep 9 04:58:58.220493 systemd[1]: sshd@18-10.200.20.39:22-10.200.16.10:56054.service: Deactivated successfully.
Sep 9 04:58:58.222588 systemd[1]: session-21.scope: Deactivated successfully.
Sep 9 04:58:58.224738 systemd-logind[1850]: Removed session 21.
Sep 9 04:58:59.856439 containerd[1866]: time="2025-09-09T04:58:59.856396793Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b46dfaa2dd53bff271aa9265db6ab46abb3bf3db4b576273b63aa3682158fd6d\" id:\"bfa422114f47582258e8794a0a1241233feb4b11b2c582896dd43c2f83df265a\" pid:6463 exited_at:{seconds:1757393939 nanos:856017686}"
Sep 9 04:59:03.292310 systemd[1]: Started sshd@19-10.200.20.39:22-10.200.16.10:49550.service - OpenSSH per-connection server daemon (10.200.16.10:49550).
Sep 9 04:59:03.719935 sshd[6475]: Accepted publickey for core from 10.200.16.10 port 49550 ssh2: RSA SHA256:3adVRBnOdwvD6jSty2nKG447nUgR57TdklVE855KMY8
Sep 9 04:59:03.721056 sshd-session[6475]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 04:59:03.724726 systemd-logind[1850]: New session 22 of user core.
Sep 9 04:59:03.732291 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 9 04:59:04.062645 sshd[6478]: Connection closed by 10.200.16.10 port 49550
Sep 9 04:59:04.063482 sshd-session[6475]: pam_unix(sshd:session): session closed for user core
Sep 9 04:59:04.066738 systemd[1]: sshd@19-10.200.20.39:22-10.200.16.10:49550.service: Deactivated successfully.
Sep 9 04:59:04.068538 systemd[1]: session-22.scope: Deactivated successfully.
Sep 9 04:59:04.069333 systemd-logind[1850]: Session 22 logged out. Waiting for processes to exit.
Sep 9 04:59:04.070810 systemd-logind[1850]: Removed session 22.
Sep 9 04:59:09.144122 systemd[1]: Started sshd@20-10.200.20.39:22-10.200.16.10:49560.service - OpenSSH per-connection server daemon (10.200.16.10:49560).
Sep 9 04:59:09.563607 sshd[6492]: Accepted publickey for core from 10.200.16.10 port 49560 ssh2: RSA SHA256:3adVRBnOdwvD6jSty2nKG447nUgR57TdklVE855KMY8
Sep 9 04:59:09.565360 sshd-session[6492]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 04:59:09.569511 systemd-logind[1850]: New session 23 of user core.
Sep 9 04:59:09.579147 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 9 04:59:09.926763 sshd[6495]: Connection closed by 10.200.16.10 port 49560
Sep 9 04:59:09.927388 sshd-session[6492]: pam_unix(sshd:session): session closed for user core
Sep 9 04:59:09.930306 systemd[1]: sshd@20-10.200.20.39:22-10.200.16.10:49560.service: Deactivated successfully.
Sep 9 04:59:09.932725 systemd[1]: session-23.scope: Deactivated successfully.
Sep 9 04:59:09.935191 systemd-logind[1850]: Session 23 logged out. Waiting for processes to exit.
Sep 9 04:59:09.936784 systemd-logind[1850]: Removed session 23.
Sep 9 04:59:11.807243 containerd[1866]: time="2025-09-09T04:59:11.807065176Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2fef469f821f5a8a3c89f16be6a251680bf2e8dee3b39f9b54d9592cad224321\" id:\"c6eccfcb247b70692e4637fc4946fa3b6319144f3f2005c8ea3b32ad0809d832\" pid:6517 exited_at:{seconds:1757393951 nanos:806538792}"
Sep 9 04:59:15.014632 systemd[1]: Started sshd@21-10.200.20.39:22-10.200.16.10:51926.service - OpenSSH per-connection server daemon (10.200.16.10:51926).
Sep 9 04:59:15.478262 sshd[6530]: Accepted publickey for core from 10.200.16.10 port 51926 ssh2: RSA SHA256:3adVRBnOdwvD6jSty2nKG447nUgR57TdklVE855KMY8
Sep 9 04:59:15.479446 sshd-session[6530]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 04:59:15.485235 systemd-logind[1850]: New session 24 of user core.
Sep 9 04:59:15.489700 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 9 04:59:15.878111 sshd[6533]: Connection closed by 10.200.16.10 port 51926
Sep 9 04:59:15.879577 sshd-session[6530]: pam_unix(sshd:session): session closed for user core
Sep 9 04:59:15.882903 systemd-logind[1850]: Session 24 logged out. Waiting for processes to exit.
Sep 9 04:59:15.882938 systemd[1]: sshd@21-10.200.20.39:22-10.200.16.10:51926.service: Deactivated successfully.
Sep 9 04:59:15.886762 systemd[1]: session-24.scope: Deactivated successfully.
Sep 9 04:59:15.888553 systemd-logind[1850]: Removed session 24.
Sep 9 04:59:20.961190 systemd[1]: Started sshd@22-10.200.20.39:22-10.200.16.10:44672.service - OpenSSH per-connection server daemon (10.200.16.10:44672).
Sep 9 04:59:21.381289 sshd[6545]: Accepted publickey for core from 10.200.16.10 port 44672 ssh2: RSA SHA256:3adVRBnOdwvD6jSty2nKG447nUgR57TdklVE855KMY8
Sep 9 04:59:21.382393 sshd-session[6545]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 04:59:21.386272 systemd-logind[1850]: New session 25 of user core.
Sep 9 04:59:21.392127 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 9 04:59:21.734693 sshd[6548]: Connection closed by 10.200.16.10 port 44672
Sep 9 04:59:21.734986 sshd-session[6545]: pam_unix(sshd:session): session closed for user core
Sep 9 04:59:21.739062 systemd[1]: sshd@22-10.200.20.39:22-10.200.16.10:44672.service: Deactivated successfully.
Sep 9 04:59:21.740823 systemd[1]: session-25.scope: Deactivated successfully.
Sep 9 04:59:21.741668 systemd-logind[1850]: Session 25 logged out. Waiting for processes to exit.
Sep 9 04:59:21.742848 systemd-logind[1850]: Removed session 25.
Sep 9 04:59:24.867564 containerd[1866]: time="2025-09-09T04:59:24.867465695Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7ab369aad34ea978c5361fe21e2660e1fbd965e5cbb965e67fc88c243aca2430\" id:\"408a14810dea52f7476cf8162519075f3da5df0e4258fa1b2d7db39368cbdf88\" pid:6572 exited_at:{seconds:1757393964 nanos:867142893}"
Sep 9 04:59:26.820019 systemd[1]: Started sshd@23-10.200.20.39:22-10.200.16.10:44676.service - OpenSSH per-connection server daemon (10.200.16.10:44676).
Sep 9 04:59:27.289698 sshd[6589]: Accepted publickey for core from 10.200.16.10 port 44676 ssh2: RSA SHA256:3adVRBnOdwvD6jSty2nKG447nUgR57TdklVE855KMY8
Sep 9 04:59:27.291219 sshd-session[6589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 04:59:27.297434 systemd-logind[1850]: New session 26 of user core.
Sep 9 04:59:27.303312 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 9 04:59:27.666228 sshd[6592]: Connection closed by 10.200.16.10 port 44676
Sep 9 04:59:27.666612 sshd-session[6589]: pam_unix(sshd:session): session closed for user core
Sep 9 04:59:27.672048 systemd[1]: sshd@23-10.200.20.39:22-10.200.16.10:44676.service: Deactivated successfully.
Sep 9 04:59:27.674356 systemd[1]: session-26.scope: Deactivated successfully.
Sep 9 04:59:27.676047 systemd-logind[1850]: Session 26 logged out. Waiting for processes to exit.
Sep 9 04:59:27.677269 systemd-logind[1850]: Removed session 26.
Sep 9 04:59:29.854567 containerd[1866]: time="2025-09-09T04:59:29.854527531Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b46dfaa2dd53bff271aa9265db6ab46abb3bf3db4b576273b63aa3682158fd6d\" id:\"7bc23c8557f3f40ccb9d79c799cbfb9e87538d635221d705f68213aa72ad5c31\" pid:6614 exited_at:{seconds:1757393969 nanos:854300972}"