Sep 16 04:40:36.009930 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd490] Sep 16 04:40:36.009947 kernel: Linux version 6.12.47-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Tue Sep 16 03:05:48 -00 2025 Sep 16 04:40:36.009954 kernel: KASLR enabled Sep 16 04:40:36.009958 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Sep 16 04:40:36.009963 kernel: printk: legacy bootconsole [pl11] enabled Sep 16 04:40:36.009967 kernel: efi: EFI v2.7 by EDK II Sep 16 04:40:36.009972 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f20f698 RNG=0x3fd5f998 MEMRESERVE=0x3e477598 Sep 16 04:40:36.009976 kernel: random: crng init done Sep 16 04:40:36.009980 kernel: secureboot: Secure boot disabled Sep 16 04:40:36.009984 kernel: ACPI: Early table checksum verification disabled Sep 16 04:40:36.009987 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL) Sep 16 04:40:36.009991 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 16 04:40:36.009995 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 16 04:40:36.010000 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628) Sep 16 04:40:36.010005 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 16 04:40:36.010009 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 16 04:40:36.010014 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 16 04:40:36.010018 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 16 04:40:36.010035 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 16 04:40:36.010039 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 16 04:40:36.010043 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Sep 16 04:40:36.010047 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 16 04:40:36.010052 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Sep 16 04:40:36.010056 kernel: ACPI: Use ACPI SPCR as default console: No Sep 16 04:40:36.010060 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Sep 16 04:40:36.010064 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] hotplug Sep 16 04:40:36.010068 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] hotplug Sep 16 04:40:36.010072 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Sep 16 04:40:36.010077 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Sep 16 04:40:36.010082 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Sep 16 04:40:36.010086 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Sep 16 04:40:36.010090 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Sep 16 04:40:36.010094 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Sep 16 04:40:36.010098 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Sep 16 04:40:36.010102 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Sep 16 04:40:36.010107 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 
0x800000000000-0xffffffffffff] hotplug Sep 16 04:40:36.010111 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x1bfffffff] -> [mem 0x00000000-0x1bfffffff] Sep 16 04:40:36.010115 kernel: NODE_DATA(0) allocated [mem 0x1bf7fda00-0x1bf804fff] Sep 16 04:40:36.010119 kernel: Zone ranges: Sep 16 04:40:36.010123 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Sep 16 04:40:36.010130 kernel: DMA32 empty Sep 16 04:40:36.010135 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Sep 16 04:40:36.010139 kernel: Device empty Sep 16 04:40:36.010143 kernel: Movable zone start for each node Sep 16 04:40:36.010148 kernel: Early memory node ranges Sep 16 04:40:36.010153 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Sep 16 04:40:36.010157 kernel: node 0: [mem 0x0000000000824000-0x000000003e45ffff] Sep 16 04:40:36.010162 kernel: node 0: [mem 0x000000003e460000-0x000000003e46ffff] Sep 16 04:40:36.010166 kernel: node 0: [mem 0x000000003e470000-0x000000003e54ffff] Sep 16 04:40:36.010170 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff] Sep 16 04:40:36.010175 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] Sep 16 04:40:36.010179 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] Sep 16 04:40:36.010183 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] Sep 16 04:40:36.010188 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Sep 16 04:40:36.010192 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Sep 16 04:40:36.010197 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Sep 16 04:40:36.010201 kernel: cma: Reserved 16 MiB at 0x000000003d400000 on node -1 Sep 16 04:40:36.010206 kernel: psci: probing for conduit method from ACPI. Sep 16 04:40:36.010211 kernel: psci: PSCIv1.1 detected in firmware. Sep 16 04:40:36.010215 kernel: psci: Using standard PSCI v0.2 function IDs Sep 16 04:40:36.010219 kernel: psci: MIGRATE_INFO_TYPE not supported. 
Sep 16 04:40:36.010224 kernel: psci: SMC Calling Convention v1.4 Sep 16 04:40:36.010228 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Sep 16 04:40:36.010233 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Sep 16 04:40:36.010237 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Sep 16 04:40:36.010241 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Sep 16 04:40:36.010246 kernel: pcpu-alloc: [0] 0 [0] 1 Sep 16 04:40:36.010250 kernel: Detected PIPT I-cache on CPU0 Sep 16 04:40:36.010256 kernel: CPU features: detected: Address authentication (architected QARMA5 algorithm) Sep 16 04:40:36.010260 kernel: CPU features: detected: GIC system register CPU interface Sep 16 04:40:36.010264 kernel: CPU features: detected: Spectre-v4 Sep 16 04:40:36.010269 kernel: CPU features: detected: Spectre-BHB Sep 16 04:40:36.010273 kernel: CPU features: kernel page table isolation forced ON by KASLR Sep 16 04:40:36.010278 kernel: CPU features: detected: Kernel page table isolation (KPTI) Sep 16 04:40:36.010282 kernel: CPU features: detected: ARM erratum 2067961 or 2054223 Sep 16 04:40:36.010286 kernel: CPU features: detected: SSBS not fully self-synchronizing Sep 16 04:40:36.010291 kernel: alternatives: applying boot alternatives Sep 16 04:40:36.010296 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=eff5cc3c399cf6fc52e3071751a09276871b099078da6d1b1a498405d04a9313 Sep 16 04:40:36.010301 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 16 04:40:36.010306 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 16 04:40:36.010311 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 16 04:40:36.010315 kernel: Fallback order for Node 0: 0 Sep 16 04:40:36.010319 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048540 Sep 16 04:40:36.010324 kernel: Policy zone: Normal Sep 16 04:40:36.010328 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 16 04:40:36.010333 kernel: software IO TLB: area num 2. Sep 16 04:40:36.010337 kernel: software IO TLB: mapped [mem 0x0000000036280000-0x000000003a280000] (64MB) Sep 16 04:40:36.010341 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Sep 16 04:40:36.010346 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 16 04:40:36.010351 kernel: rcu: RCU event tracing is enabled. Sep 16 04:40:36.010357 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Sep 16 04:40:36.010361 kernel: Trampoline variant of Tasks RCU enabled. Sep 16 04:40:36.010365 kernel: Tracing variant of Tasks RCU enabled. Sep 16 04:40:36.010370 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 16 04:40:36.010374 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Sep 16 04:40:36.010379 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 16 04:40:36.010383 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Sep 16 04:40:36.010388 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Sep 16 04:40:36.010392 kernel: GICv3: 960 SPIs implemented Sep 16 04:40:36.010397 kernel: GICv3: 0 Extended SPIs implemented Sep 16 04:40:36.010401 kernel: Root IRQ handler: gic_handle_irq Sep 16 04:40:36.010405 kernel: GICv3: GICv3 features: 16 PPIs, RSS Sep 16 04:40:36.010410 kernel: GICv3: GICD_CTRL.DS=0, SCR_EL3.FIQ=0 Sep 16 04:40:36.010415 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Sep 16 04:40:36.010419 kernel: ITS: No ITS available, not enabling LPIs Sep 16 04:40:36.010424 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 16 04:40:36.010428 kernel: arch_timer: cp15 timer(s) running at 1000.00MHz (virt). Sep 16 04:40:36.010433 kernel: clocksource: arch_sys_counter: mask: 0x1fffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Sep 16 04:40:36.010437 kernel: sched_clock: 61 bits at 1000MHz, resolution 1ns, wraps every 4398046511103ns Sep 16 04:40:36.010442 kernel: Console: colour dummy device 80x25 Sep 16 04:40:36.010446 kernel: printk: legacy console [tty1] enabled Sep 16 04:40:36.010451 kernel: ACPI: Core revision 20240827 Sep 16 04:40:36.010455 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 2000.00 BogoMIPS (lpj=1000000) Sep 16 04:40:36.010461 kernel: pid_max: default: 32768 minimum: 301 Sep 16 04:40:36.010465 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Sep 16 04:40:36.010470 kernel: landlock: Up and running. Sep 16 04:40:36.010474 kernel: SELinux: Initializing. Sep 16 04:40:36.010479 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 16 04:40:36.010487 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 16 04:40:36.010492 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x1a0000e, misc 0x31e1 Sep 16 04:40:36.010497 kernel: Hyper-V: Host Build 10.0.26100.1261-1-0 Sep 16 04:40:36.010502 kernel: Hyper-V: enabling crash_kexec_post_notifiers Sep 16 04:40:36.010506 kernel: rcu: Hierarchical SRCU implementation. Sep 16 04:40:36.010511 kernel: rcu: Max phase no-delay instances is 400. Sep 16 04:40:36.010517 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Sep 16 04:40:36.010522 kernel: Remapping and enabling EFI services. Sep 16 04:40:36.010526 kernel: smp: Bringing up secondary CPUs ... Sep 16 04:40:36.010531 kernel: Detected PIPT I-cache on CPU1 Sep 16 04:40:36.010536 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Sep 16 04:40:36.010542 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd490] Sep 16 04:40:36.010546 kernel: smp: Brought up 1 node, 2 CPUs Sep 16 04:40:36.010551 kernel: SMP: Total of 2 processors activated. 
Sep 16 04:40:36.010556 kernel: CPU: All CPU(s) started at EL1 Sep 16 04:40:36.010561 kernel: CPU features: detected: 32-bit EL0 Support Sep 16 04:40:36.010566 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Sep 16 04:40:36.010570 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Sep 16 04:40:36.010575 kernel: CPU features: detected: Common not Private translations Sep 16 04:40:36.010580 kernel: CPU features: detected: CRC32 instructions Sep 16 04:40:36.010586 kernel: CPU features: detected: Generic authentication (architected QARMA5 algorithm) Sep 16 04:40:36.010590 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Sep 16 04:40:36.010595 kernel: CPU features: detected: LSE atomic instructions Sep 16 04:40:36.010600 kernel: CPU features: detected: Privileged Access Never Sep 16 04:40:36.010605 kernel: CPU features: detected: Speculation barrier (SB) Sep 16 04:40:36.010609 kernel: CPU features: detected: TLB range maintenance instructions Sep 16 04:40:36.010614 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Sep 16 04:40:36.010619 kernel: CPU features: detected: Scalable Vector Extension Sep 16 04:40:36.010624 kernel: alternatives: applying system-wide alternatives Sep 16 04:40:36.010629 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1 Sep 16 04:40:36.010634 kernel: SVE: maximum available vector length 16 bytes per vector Sep 16 04:40:36.010639 kernel: SVE: default vector length 16 bytes per vector Sep 16 04:40:36.010644 kernel: Memory: 3959604K/4194160K available (11136K kernel code, 2440K rwdata, 9068K rodata, 38976K init, 1038K bss, 213368K reserved, 16384K cma-reserved) Sep 16 04:40:36.010649 kernel: devtmpfs: initialized Sep 16 04:40:36.010654 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 16 04:40:36.010658 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Sep 16 04:40:36.010663 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Sep 16 04:40:36.010668 kernel: 0 pages in range for non-PLT usage Sep 16 04:40:36.010673 kernel: 508560 pages in range for PLT usage Sep 16 04:40:36.010678 kernel: pinctrl core: initialized pinctrl subsystem Sep 16 04:40:36.010683 kernel: SMBIOS 3.1.0 present. Sep 16 04:40:36.010688 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Sep 16 04:40:36.010692 kernel: DMI: Memory slots populated: 2/2 Sep 16 04:40:36.010697 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 16 04:40:36.010702 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Sep 16 04:40:36.010707 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Sep 16 04:40:36.010712 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Sep 16 04:40:36.010717 kernel: audit: initializing netlink subsys (disabled) Sep 16 04:40:36.010722 kernel: audit: type=2000 audit(0.059:1): state=initialized audit_enabled=0 res=1 Sep 16 04:40:36.010727 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 16 04:40:36.010732 kernel: cpuidle: using governor menu Sep 16 04:40:36.010737 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Sep 16 04:40:36.010741 kernel: ASID allocator initialised with 32768 entries Sep 16 04:40:36.010746 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 16 04:40:36.010751 kernel: Serial: AMBA PL011 UART driver Sep 16 04:40:36.010755 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 16 04:40:36.010761 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Sep 16 04:40:36.010766 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Sep 16 04:40:36.010771 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Sep 16 04:40:36.010775 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 16 04:40:36.010780 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Sep 16 04:40:36.010785 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Sep 16 04:40:36.010790 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Sep 16 04:40:36.010794 kernel: ACPI: Added _OSI(Module Device) Sep 16 04:40:36.010799 kernel: ACPI: Added _OSI(Processor Device) Sep 16 04:40:36.010805 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 16 04:40:36.010809 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 16 04:40:36.010814 kernel: ACPI: Interpreter enabled Sep 16 04:40:36.010819 kernel: ACPI: Using GIC for interrupt routing Sep 16 04:40:36.010824 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Sep 16 04:40:36.010828 kernel: printk: legacy console [ttyAMA0] enabled Sep 16 04:40:36.010833 kernel: printk: legacy bootconsole [pl11] disabled Sep 16 04:40:36.010838 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Sep 16 04:40:36.010843 kernel: ACPI: CPU0 has been hot-added Sep 16 04:40:36.010848 kernel: ACPI: CPU1 has been hot-added Sep 16 04:40:36.010853 kernel: iommu: Default domain type: Translated Sep 16 04:40:36.010857 kernel: iommu: DMA domain TLB invalidation policy: strict mode Sep 16 04:40:36.010862 kernel: efivars: Registered efivars operations Sep 16 04:40:36.010867 kernel: vgaarb: loaded Sep 16 04:40:36.010872 kernel: clocksource: Switched to clocksource arch_sys_counter Sep 16 04:40:36.010876 kernel: VFS: Disk quotas dquot_6.6.0 Sep 16 04:40:36.010881 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 16 04:40:36.010886 kernel: pnp: PnP ACPI init Sep 16 04:40:36.010891 kernel: pnp: PnP ACPI: found 0 devices Sep 16 04:40:36.010896 kernel: NET: Registered PF_INET protocol family Sep 16 04:40:36.010901 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 16 04:40:36.010905 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 16 04:40:36.010910 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 16 04:40:36.010915 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 16 04:40:36.010920 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 16 04:40:36.010925 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 16 04:40:36.010929 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 16 04:40:36.010935 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 16 04:40:36.010940 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 16 04:40:36.010944 kernel: PCI: CLS 0 bytes, default 64 Sep 16 04:40:36.010949 kernel: kvm [1]: HYP mode not available Sep 
16 04:40:36.010954 kernel: Initialise system trusted keyrings Sep 16 04:40:36.010959 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 16 04:40:36.010963 kernel: Key type asymmetric registered Sep 16 04:40:36.010968 kernel: Asymmetric key parser 'x509' registered Sep 16 04:40:36.010973 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Sep 16 04:40:36.010978 kernel: io scheduler mq-deadline registered Sep 16 04:40:36.010983 kernel: io scheduler kyber registered Sep 16 04:40:36.010988 kernel: io scheduler bfq registered Sep 16 04:40:36.010993 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 16 04:40:36.010997 kernel: thunder_xcv, ver 1.0 Sep 16 04:40:36.011002 kernel: thunder_bgx, ver 1.0 Sep 16 04:40:36.011006 kernel: nicpf, ver 1.0 Sep 16 04:40:36.011011 kernel: nicvf, ver 1.0 Sep 16 04:40:36.011122 kernel: rtc-efi rtc-efi.0: registered as rtc0 Sep 16 04:40:36.011175 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-16T04:40:35 UTC (1757997635) Sep 16 04:40:36.011181 kernel: efifb: probing for efifb Sep 16 04:40:36.011186 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Sep 16 04:40:36.011191 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Sep 16 04:40:36.011196 kernel: efifb: scrolling: redraw Sep 16 04:40:36.011201 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Sep 16 04:40:36.011206 kernel: Console: switching to colour frame buffer device 128x48 Sep 16 04:40:36.011210 kernel: fb0: EFI VGA frame buffer device Sep 16 04:40:36.011216 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Sep 16 04:40:36.011221 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 16 04:40:36.011226 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Sep 16 04:40:36.011231 kernel: NET: Registered PF_INET6 protocol family Sep 16 04:40:36.011236 kernel: watchdog: NMI not fully supported Sep 16 04:40:36.011240 kernel: watchdog: Hard watchdog permanently disabled Sep 16 04:40:36.011245 kernel: Segment Routing with IPv6 Sep 16 04:40:36.011250 kernel: In-situ OAM (IOAM) with IPv6 Sep 16 04:40:36.011255 kernel: NET: Registered PF_PACKET protocol family Sep 16 04:40:36.011260 kernel: Key type dns_resolver registered Sep 16 04:40:36.011265 kernel: registered taskstats version 1 Sep 16 04:40:36.011270 kernel: Loading compiled-in X.509 certificates Sep 16 04:40:36.011275 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.47-flatcar: 99eb88579c3d58869b2224a85ec8efa5647af805' Sep 16 04:40:36.011279 kernel: Demotion targets for Node 0: null Sep 16 04:40:36.011284 kernel: Key type .fscrypt registered Sep 16 04:40:36.011289 kernel: Key type fscrypt-provisioning registered Sep 16 04:40:36.011294 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 16 04:40:36.011298 kernel: ima: Allocated hash algorithm: sha1 Sep 16 04:40:36.011304 kernel: ima: No architecture policies found Sep 16 04:40:36.011309 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Sep 16 04:40:36.011313 kernel: clk: Disabling unused clocks Sep 16 04:40:36.011318 kernel: PM: genpd: Disabling unused power domains Sep 16 04:40:36.011323 kernel: Warning: unable to open an initial console. 
Sep 16 04:40:36.011328 kernel: Freeing unused kernel memory: 38976K Sep 16 04:40:36.011332 kernel: Run /init as init process Sep 16 04:40:36.011337 kernel: with arguments: Sep 16 04:40:36.011342 kernel: /init Sep 16 04:40:36.011347 kernel: with environment: Sep 16 04:40:36.011352 kernel: HOME=/ Sep 16 04:40:36.011357 kernel: TERM=linux Sep 16 04:40:36.011361 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 16 04:40:36.011367 systemd[1]: Successfully made /usr/ read-only. Sep 16 04:40:36.011374 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 16 04:40:36.011379 systemd[1]: Detected virtualization microsoft. Sep 16 04:40:36.011385 systemd[1]: Detected architecture arm64. Sep 16 04:40:36.011390 systemd[1]: Running in initrd. Sep 16 04:40:36.011395 systemd[1]: No hostname configured, using default hostname. Sep 16 04:40:36.011401 systemd[1]: Hostname set to . Sep 16 04:40:36.011406 systemd[1]: Initializing machine ID from random generator. Sep 16 04:40:36.011411 systemd[1]: Queued start job for default target initrd.target. Sep 16 04:40:36.011416 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 16 04:40:36.011421 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 16 04:40:36.011427 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 16 04:40:36.011433 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 16 04:40:36.011438 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 16 04:40:36.011444 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 16 04:40:36.011450 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 16 04:40:36.011455 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 16 04:40:36.011460 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 16 04:40:36.011466 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 16 04:40:36.011471 systemd[1]: Reached target paths.target - Path Units. Sep 16 04:40:36.011477 systemd[1]: Reached target slices.target - Slice Units. Sep 16 04:40:36.011482 systemd[1]: Reached target swap.target - Swaps. Sep 16 04:40:36.011487 systemd[1]: Reached target timers.target - Timer Units. Sep 16 04:40:36.011492 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 16 04:40:36.011497 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 16 04:40:36.011502 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 16 04:40:36.011508 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 16 04:40:36.011514 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 16 04:40:36.011519 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
Sep 16 04:40:36.011524 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 16 04:40:36.011529 systemd[1]: Reached target sockets.target - Socket Units. Sep 16 04:40:36.011534 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 16 04:40:36.011539 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 16 04:40:36.011545 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 16 04:40:36.011550 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 16 04:40:36.011556 systemd[1]: Starting systemd-fsck-usr.service... Sep 16 04:40:36.011561 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 16 04:40:36.011566 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 16 04:40:36.011582 systemd-journald[224]: Collecting audit messages is disabled. Sep 16 04:40:36.011596 systemd-journald[224]: Journal started Sep 16 04:40:36.011610 systemd-journald[224]: Runtime Journal (/run/log/journal/a221170635a04dc9b21f7fb47ed00693) is 8M, max 78.5M, 70.5M free. Sep 16 04:40:36.024737 systemd-modules-load[226]: Inserted module 'overlay' Sep 16 04:40:36.030558 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 16 04:40:36.045004 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 16 04:40:36.045042 systemd[1]: Started systemd-journald.service - Journal Service. Sep 16 04:40:36.045052 kernel: Bridge firewalling registered Sep 16 04:40:36.047003 systemd-modules-load[226]: Inserted module 'br_netfilter' Sep 16 04:40:36.054304 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 16 04:40:36.063002 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 16 04:40:36.067955 systemd[1]: Finished systemd-fsck-usr.service. Sep 16 04:40:36.075399 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 16 04:40:36.081963 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 16 04:40:36.091845 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 16 04:40:36.105984 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 16 04:40:36.117182 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 16 04:40:36.133176 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 16 04:40:36.145043 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 16 04:40:36.145078 systemd-tmpfiles[256]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Sep 16 04:40:36.155863 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 16 04:40:36.165049 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 16 04:40:36.173831 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 16 04:40:36.185225 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 16 04:40:36.200766 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Sep 16 04:40:36.214131 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 16 04:40:36.227636 dracut-cmdline[262]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=eff5cc3c399cf6fc52e3071751a09276871b099078da6d1b1a498405d04a9313 Sep 16 04:40:36.257662 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 16 04:40:36.259484 systemd-resolved[263]: Positive Trust Anchors: Sep 16 04:40:36.259492 systemd-resolved[263]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 16 04:40:36.259514 systemd-resolved[263]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 16 04:40:36.261069 systemd-resolved[263]: Defaulting to hostname 'linux'. Sep 16 04:40:36.267284 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 16 04:40:36.276528 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 16 04:40:36.367040 kernel: SCSI subsystem initialized Sep 16 04:40:36.373031 kernel: Loading iSCSI transport class v2.0-870. Sep 16 04:40:36.380047 kernel: iscsi: registered transport (tcp) Sep 16 04:40:36.392996 kernel: iscsi: registered transport (qla4xxx) Sep 16 04:40:36.393045 kernel: QLogic iSCSI HBA Driver Sep 16 04:40:36.407112 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 16 04:40:36.425180 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 16 04:40:36.430856 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 16 04:40:36.475394 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 16 04:40:36.484148 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 16 04:40:36.541036 kernel: raid6: neonx8 gen() 18540 MB/s Sep 16 04:40:36.560028 kernel: raid6: neonx4 gen() 18568 MB/s Sep 16 04:40:36.579026 kernel: raid6: neonx2 gen() 17103 MB/s Sep 16 04:40:36.599115 kernel: raid6: neonx1 gen() 15011 MB/s Sep 16 04:40:36.619111 kernel: raid6: int64x8 gen() 10535 MB/s Sep 16 04:40:36.638101 kernel: raid6: int64x4 gen() 10611 MB/s Sep 16 04:40:36.657102 kernel: raid6: int64x2 gen() 8992 MB/s Sep 16 04:40:36.678781 kernel: raid6: int64x1 gen() 7016 MB/s Sep 16 04:40:36.678790 kernel: raid6: using algorithm neonx4 gen() 18568 MB/s Sep 16 04:40:36.700046 kernel: raid6: .... 
xor() 15147 MB/s, rmw enabled Sep 16 04:40:36.700082 kernel: raid6: using neon recovery algorithm Sep 16 04:40:36.707030 kernel: xor: measuring software checksum speed Sep 16 04:40:36.707064 kernel: 8regs : 27338 MB/sec Sep 16 04:40:36.712147 kernel: 32regs : 28820 MB/sec Sep 16 04:40:36.714875 kernel: arm64_neon : 37690 MB/sec Sep 16 04:40:36.717824 kernel: xor: using function: arm64_neon (37690 MB/sec) Sep 16 04:40:36.756045 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 16 04:40:36.760921 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 16 04:40:36.770176 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 16 04:40:36.803430 systemd-udevd[475]: Using default interface naming scheme 'v255'. Sep 16 04:40:36.810080 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 16 04:40:36.823001 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 16 04:40:36.851779 dracut-pre-trigger[484]: rd.md=0: removing MD RAID activation Sep 16 04:40:36.873694 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 16 04:40:36.879284 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 16 04:40:36.928354 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 16 04:40:36.934706 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 16 04:40:37.007045 kernel: hv_vmbus: Vmbus version:5.3 Sep 16 04:40:37.008222 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 16 04:40:37.008329 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 16 04:40:37.024521 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 16 04:40:37.039203 kernel: hv_vmbus: registering driver hid_hyperv Sep 16 04:40:37.039226 kernel: hv_vmbus: registering driver hv_netvsc Sep 16 04:40:37.039240 kernel: hv_vmbus: registering driver hyperv_keyboard Sep 16 04:40:37.040045 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Sep 16 04:40:37.054227 kernel: pps_core: LinuxPPS API ver. 1 registered Sep 16 04:40:37.054253 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Sep 16 04:40:37.054262 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Sep 16 04:40:37.064153 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Sep 16 04:40:37.047484 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 16 04:40:37.070358 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 16 04:40:37.080143 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 16 04:40:37.103202 kernel: PTP clock support registered Sep 16 04:40:37.103229 kernel: hv_vmbus: registering driver hv_storvsc Sep 16 04:40:37.103236 kernel: scsi host0: storvsc_host_t Sep 16 04:40:37.103377 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Sep 16 04:40:37.103394 kernel: scsi host1: storvsc_host_t Sep 16 04:40:37.080210 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Sep 16 04:40:37.116660 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Sep 16 04:40:37.102141 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 16 04:40:37.134684 kernel: hv_utils: Registering HyperV Utility Driver Sep 16 04:40:37.134713 kernel: hv_vmbus: registering driver hv_utils Sep 16 04:40:37.143782 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 16 04:40:37.026527 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Sep 16 04:40:37.038520 kernel: hv_utils: Shutdown IC version 3.2 Sep 16 04:40:37.038532 kernel: hv_utils: Heartbeat IC version 3.0 Sep 16 04:40:37.038540 kernel: hv_utils: TimeSync IC version 4.0 Sep 16 04:40:37.038545 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Sep 16 04:40:37.038659 kernel: sd 0:0:0:0: [sda] Write Protect is off Sep 16 04:40:37.038747 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Sep 16 04:40:37.038818 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Sep 16 04:40:37.038879 kernel: hv_netvsc 000d3af6-385b-000d-3af6-385b000d3af6 eth0: VF slot 1 added Sep 16 04:40:37.038964 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#128 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Sep 16 04:40:37.039038 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#135 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Sep 16 04:40:37.039104 systemd-journald[224]: Time jumped backwards, rotating. Sep 16 04:40:37.002520 systemd-resolved[263]: Clock change detected. Flushing caches. Sep 16 04:40:37.047859 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 16 04:40:37.047886 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Sep 16 04:40:37.054942 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Sep 16 04:40:37.055096 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 16 04:40:37.056676 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Sep 16 04:40:37.067505 kernel: hv_vmbus: registering driver hv_pci Sep 16 04:40:37.067548 kernel: hv_pci e34cda58-e790-47b3-a4d3-fdcec6f3207a: PCI VMBus probing: Using version 0x10004 Sep 16 04:40:37.078497 kernel: hv_pci e34cda58-e790-47b3-a4d3-fdcec6f3207a: PCI host bridge to bus e790:00 Sep 16 04:40:37.078657 kernel: pci_bus e790:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Sep 16 04:40:37.082923 kernel: pci_bus e790:00: No busn resource found for root bus, will use [bus 00-ff] Sep 16 04:40:37.088752 kernel: pci e790:00:02.0: [15b3:101a] type 00 class 0x020000 PCIe Endpoint Sep 16 04:40:37.088818 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#182 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Sep 16 04:40:37.097761 kernel: pci e790:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref] Sep 16 04:40:37.104693 kernel: pci e790:00:02.0: enabling Extended Tags Sep 16 04:40:37.119626 kernel: pci e790:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at e790:00:02.0 (capable of 252.048 Gb/s with 16.0 GT/s PCIe x16 link) Sep 16 04:40:37.119662 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#158 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Sep 16 04:40:37.129154 kernel: pci_bus e790:00: busn_res: [bus 00-ff] end is updated to 00 Sep 16 04:40:37.133374 kernel: pci e790:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]: assigned Sep 16 04:40:37.191728 kernel: mlx5_core e790:00:02.0: enabling device (0000 -> 0002) Sep 16 04:40:37.199261 kernel: mlx5_core e790:00:02.0: PTM is not supported by PCIe Sep 16 
04:40:37.199382 kernel: mlx5_core e790:00:02.0: firmware version: 16.30.5006 Sep 16 04:40:37.371817 kernel: hv_netvsc 000d3af6-385b-000d-3af6-385b000d3af6 eth0: VF registering: eth1 Sep 16 04:40:37.372008 kernel: mlx5_core e790:00:02.0 eth1: joined to eth0 Sep 16 04:40:37.376932 kernel: mlx5_core e790:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Sep 16 04:40:37.386626 kernel: mlx5_core e790:00:02.0 enP59280s1: renamed from eth1 Sep 16 04:40:37.615473 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Sep 16 04:40:37.669144 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Sep 16 04:40:37.692756 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Sep 16 04:40:37.697962 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Sep 16 04:40:37.713939 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Sep 16 04:40:37.719748 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 16 04:40:37.728301 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 16 04:40:37.735729 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 16 04:40:37.744549 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 16 04:40:37.753343 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 16 04:40:37.777235 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 16 04:40:37.797633 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#252 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Sep 16 04:40:37.805675 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 16 04:40:37.808206 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 16 04:40:38.818925 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Sep 16 04:40:38.831699 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 16 04:40:38.832494 disk-uuid[656]: The operation has completed successfully. Sep 16 04:40:38.905428 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 16 04:40:38.906636 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 16 04:40:38.927385 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 16 04:40:38.943637 sh[821]: Success Sep 16 04:40:38.973152 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 16 04:40:38.973210 kernel: device-mapper: uevent: version 1.0.3 Sep 16 04:40:38.977542 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 16 04:40:38.985676 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Sep 16 04:40:39.296745 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 16 04:40:39.301448 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 16 04:40:39.316134 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Sep 16 04:40:39.337629 kernel: BTRFS: device fsid 782b6948-7aaa-439e-9946-c8fdb4d8f287 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (840) Sep 16 04:40:39.345863 kernel: BTRFS info (device dm-0): first mount of filesystem 782b6948-7aaa-439e-9946-c8fdb4d8f287 Sep 16 04:40:39.345882 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Sep 16 04:40:39.811715 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 16 04:40:39.811785 kernel: BTRFS info (device dm-0): enabling free space tree Sep 16 04:40:39.840150 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 16 04:40:39.844177 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 16 04:40:39.851244 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 16 04:40:39.852732 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 16 04:40:39.871140 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 16 04:40:39.901621 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (878) Sep 16 04:40:39.912121 kernel: BTRFS info (device sda6): first mount of filesystem a546938e-7af2-44ea-b88d-218d567c463b Sep 16 04:40:39.912150 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Sep 16 04:40:39.955341 kernel: BTRFS info (device sda6): turning on async discard Sep 16 04:40:39.955384 kernel: BTRFS info (device sda6): enabling free space tree Sep 16 04:40:39.963656 kernel: BTRFS info (device sda6): last unmount of filesystem a546938e-7af2-44ea-b88d-218d567c463b Sep 16 04:40:39.965649 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 16 04:40:39.972752 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 16 04:40:39.988906 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 16 04:40:39.999266 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 16 04:40:40.032113 systemd-networkd[1009]: lo: Link UP Sep 16 04:40:40.032121 systemd-networkd[1009]: lo: Gained carrier Sep 16 04:40:40.033087 systemd-networkd[1009]: Enumeration completed Sep 16 04:40:40.034526 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 16 04:40:40.037697 systemd-networkd[1009]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 16 04:40:40.037700 systemd-networkd[1009]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 16 04:40:40.041660 systemd[1]: Reached target network.target - Network. Sep 16 04:40:40.110779 kernel: mlx5_core e790:00:02.0 enP59280s1: Link up Sep 16 04:40:40.110991 kernel: buffer_size[0]=0 is not enough for lossless buffer Sep 16 04:40:40.145074 systemd-networkd[1009]: enP59280s1: Link UP Sep 16 04:40:40.148152 kernel: hv_netvsc 000d3af6-385b-000d-3af6-385b000d3af6 eth0: Data path switched to VF: enP59280s1 Sep 16 04:40:40.145132 systemd-networkd[1009]: eth0: Link UP Sep 16 04:40:40.145228 systemd-networkd[1009]: eth0: Gained carrier Sep 16 04:40:40.145240 systemd-networkd[1009]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Sep 16 04:40:40.162782 systemd-networkd[1009]: enP59280s1: Gained carrier Sep 16 04:40:40.177647 systemd-networkd[1009]: eth0: DHCPv4 address 10.200.20.12/24, gateway 10.200.20.1 acquired from 168.63.129.16 Sep 16 04:40:41.359589 ignition[1004]: Ignition 2.22.0 Sep 16 04:40:41.359625 ignition[1004]: Stage: fetch-offline Sep 16 04:40:41.366028 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 16 04:40:41.359740 ignition[1004]: no configs at "/usr/lib/ignition/base.d" Sep 16 04:40:41.375850 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Sep 16 04:40:41.359746 ignition[1004]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 16 04:40:41.359828 ignition[1004]: parsed url from cmdline: "" Sep 16 04:40:41.359830 ignition[1004]: no config URL provided Sep 16 04:40:41.359833 ignition[1004]: reading system config file "/usr/lib/ignition/user.ign" Sep 16 04:40:41.359838 ignition[1004]: no config at "/usr/lib/ignition/user.ign" Sep 16 04:40:41.359841 ignition[1004]: failed to fetch config: resource requires networking Sep 16 04:40:41.359962 ignition[1004]: Ignition finished successfully Sep 16 04:40:41.409485 ignition[1020]: Ignition 2.22.0 Sep 16 04:40:41.411763 ignition[1020]: Stage: fetch Sep 16 04:40:41.411938 ignition[1020]: no configs at "/usr/lib/ignition/base.d" Sep 16 04:40:41.411945 ignition[1020]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 16 04:40:41.412004 ignition[1020]: parsed url from cmdline: "" Sep 16 04:40:41.412006 ignition[1020]: no config URL provided Sep 16 04:40:41.412009 ignition[1020]: reading system config file "/usr/lib/ignition/user.ign" Sep 16 04:40:41.412015 ignition[1020]: no config at "/usr/lib/ignition/user.ign" Sep 16 04:40:41.412030 ignition[1020]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Sep 16 04:40:41.546930 ignition[1020]: GET result: OK Sep 16 04:40:41.547003 ignition[1020]: config has been read from IMDS userdata Sep 16 04:40:41.547027 ignition[1020]: parsing config with SHA512: 720ac735cf189bd01a37c7a4dd8b9f5a34295dbe94b048ab80ec9993e7b6eeed13b007859e7ab4d9da3997a8603a022c9d52eb3edb6e7c77fb6ea0d702c29e99 Sep 16 04:40:41.551052 unknown[1020]: fetched base config from "system" Sep 16 04:40:41.552683 ignition[1020]: fetch: fetch complete Sep 16 04:40:41.551057 unknown[1020]: fetched base config from "system" Sep 16 04:40:41.552688 ignition[1020]: fetch: fetch passed Sep 16 04:40:41.551063 unknown[1020]: fetched user config from "azure" Sep 16 04:40:41.552732 ignition[1020]: Ignition finished successfully Sep 16 04:40:41.555904 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Sep 16 04:40:41.561348 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 16 04:40:41.597256 ignition[1026]: Ignition 2.22.0 Sep 16 04:40:41.597271 ignition[1026]: Stage: kargs Sep 16 04:40:41.600721 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 16 04:40:41.597430 ignition[1026]: no configs at "/usr/lib/ignition/base.d" Sep 16 04:40:41.605854 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Sep 16 04:40:41.597436 ignition[1026]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 16 04:40:41.597973 ignition[1026]: kargs: kargs passed Sep 16 04:40:41.598018 ignition[1026]: Ignition finished successfully Sep 16 04:40:41.640731 ignition[1032]: Ignition 2.22.0 Sep 16 04:40:41.640744 ignition[1032]: Stage: disks Sep 16 04:40:41.646702 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 16 04:40:41.640900 ignition[1032]: no configs at "/usr/lib/ignition/base.d" Sep 16 04:40:41.640907 ignition[1032]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 16 04:40:41.654189 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 16 04:40:41.641382 ignition[1032]: disks: disks passed Sep 16 04:40:41.661841 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 16 04:40:41.641416 ignition[1032]: Ignition finished successfully Sep 16 04:40:41.670300 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 16 04:40:41.677899 systemd[1]: Reached target sysinit.target - System Initialization. Sep 16 04:40:41.683792 systemd[1]: Reached target basic.target - Basic System. Sep 16 04:40:41.692045 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 16 04:40:41.751052 systemd-fsck[1041]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks Sep 16 04:40:41.756949 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 16 04:40:41.762156 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 16 04:40:42.032721 systemd-networkd[1009]: eth0: Gained IPv6LL Sep 16 04:40:43.382639 kernel: EXT4-fs (sda9): mounted filesystem a00d22d9-68b1-4a84-acfc-9fae1fca53dd r/w with ordered data mode. Quota mode: none. Sep 16 04:40:43.382990 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 16 04:40:43.386498 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 16 04:40:43.415257 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 16 04:40:43.441343 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 16 04:40:43.457204 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1055) Sep 16 04:40:43.457220 kernel: BTRFS info (device sda6): first mount of filesystem a546938e-7af2-44ea-b88d-218d567c463b Sep 16 04:40:43.457227 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Sep 16 04:40:43.458730 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Sep 16 04:40:43.464017 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 16 04:40:43.491771 kernel: BTRFS info (device sda6): turning on async discard Sep 16 04:40:43.491788 kernel: BTRFS info (device sda6): enabling free space tree Sep 16 04:40:43.464042 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 16 04:40:43.473495 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 16 04:40:43.494453 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 16 04:40:43.502409 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Sep 16 04:40:44.201983 coreos-metadata[1057]: Sep 16 04:40:44.201 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Sep 16 04:40:44.207737 coreos-metadata[1057]: Sep 16 04:40:44.206 INFO Fetch successful Sep 16 04:40:44.207737 coreos-metadata[1057]: Sep 16 04:40:44.206 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Sep 16 04:40:44.219156 coreos-metadata[1057]: Sep 16 04:40:44.219 INFO Fetch successful Sep 16 04:40:44.229831 coreos-metadata[1057]: Sep 16 04:40:44.229 INFO wrote hostname ci-4459.0.0-n-404d4275b5 to /sysroot/etc/hostname Sep 16 04:40:44.236014 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 16 04:40:44.449844 initrd-setup-root[1085]: cut: /sysroot/etc/passwd: No such file or directory Sep 16 04:40:44.476402 initrd-setup-root[1092]: cut: /sysroot/etc/group: No such file or directory Sep 16 04:40:44.497025 initrd-setup-root[1099]: cut: /sysroot/etc/shadow: No such file or directory Sep 16 04:40:44.512437 initrd-setup-root[1106]: cut: /sysroot/etc/gshadow: No such file or directory Sep 16 04:40:45.671025 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 16 04:40:45.676391 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 16 04:40:45.692199 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 16 04:40:45.702814 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 16 04:40:45.711166 kernel: BTRFS info (device sda6): last unmount of filesystem a546938e-7af2-44ea-b88d-218d567c463b Sep 16 04:40:45.733140 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 16 04:40:45.739900 ignition[1174]: INFO : Ignition 2.22.0 Sep 16 04:40:45.739900 ignition[1174]: INFO : Stage: mount Sep 16 04:40:45.739900 ignition[1174]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 16 04:40:45.739900 ignition[1174]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 16 04:40:45.739900 ignition[1174]: INFO : mount: mount passed Sep 16 04:40:45.739900 ignition[1174]: INFO : Ignition finished successfully Sep 16 04:40:45.740622 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 16 04:40:45.749837 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 16 04:40:45.775268 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 16 04:40:45.792619 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1186) Sep 16 04:40:45.801910 kernel: BTRFS info (device sda6): first mount of filesystem a546938e-7af2-44ea-b88d-218d567c463b Sep 16 04:40:45.801932 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Sep 16 04:40:45.811920 kernel: BTRFS info (device sda6): turning on async discard Sep 16 04:40:45.811934 kernel: BTRFS info (device sda6): enabling free space tree Sep 16 04:40:45.813309 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 16 04:40:45.843814 ignition[1203]: INFO : Ignition 2.22.0 Sep 16 04:40:45.843814 ignition[1203]: INFO : Stage: files Sep 16 04:40:45.849415 ignition[1203]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 16 04:40:45.849415 ignition[1203]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 16 04:40:45.849415 ignition[1203]: DEBUG : files: compiled without relabeling support, skipping Sep 16 04:40:45.868228 ignition[1203]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 16 04:40:45.868228 ignition[1203]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 16 04:40:45.921824 ignition[1203]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 16 04:40:45.927155 ignition[1203]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 16 04:40:45.927155 ignition[1203]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 16 04:40:45.922134 unknown[1203]: wrote ssh authorized keys file for user: core Sep 16 04:40:45.974535 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Sep 16 04:40:45.981868 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Sep 16 04:40:46.370688 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 16 04:40:46.834939 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Sep 16 04:40:46.834939 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 16 04:40:46.847969 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Sep 16 04:40:46.847969 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 16 04:40:46.847969 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 16 04:40:46.847969 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 16 04:40:46.847969 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 16 04:40:46.847969 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 16 04:40:46.847969 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 16 04:40:46.892555 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 16 04:40:46.892555 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 16 04:40:46.892555 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Sep 16 04:40:46.892555 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Sep 16 04:40:46.892555 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Sep 16 04:40:46.892555 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Sep 16 04:40:47.351086 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 16 04:40:47.769577 ignition[1203]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Sep 16 04:40:47.769577 ignition[1203]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 16 04:40:47.812127 ignition[1203]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 16 04:40:47.828237 ignition[1203]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 16 04:40:47.828237 ignition[1203]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 16 04:40:47.839948 ignition[1203]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Sep 16 04:40:47.839948 ignition[1203]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Sep 16 04:40:47.839948 ignition[1203]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 16 04:40:47.839948 ignition[1203]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 16 04:40:47.839948 ignition[1203]: INFO : files: files passed Sep 16 04:40:47.839948 ignition[1203]: INFO : Ignition finished successfully Sep 16 04:40:47.840016 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 16 04:40:47.853184 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 16 04:40:47.882162 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 16 04:40:47.892894 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 16 04:40:47.892960 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 16 04:40:47.917112 initrd-setup-root-after-ignition[1231]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 16 04:40:47.917112 initrd-setup-root-after-ignition[1231]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 16 04:40:47.906495 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 16 04:40:47.942359 initrd-setup-root-after-ignition[1235]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 16 04:40:47.913491 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 16 04:40:47.922708 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 16 04:40:47.970920 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 16 04:40:47.974617 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
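The files stage above is driven by an Ignition config: fetch remote artifacts, write files and symlinks, install units, set presets. The config itself is not shown in the log; the snippet below emits a hypothetical Ignition v3 config that would produce ops like op(3) (the helm tarball), op(9) (the kubernetes.raw link), and op(b)/op(d) (the prepare-helm.service unit and its preset). URLs and paths are taken from the log; the schema version and unit body are assumptions.

```python
# Hypothetical Ignition v3 config matching the logged file ops.
import json

config = {
    "ignition": {"version": "3.4.0"},  # assumption: any v3 schema revision works
    "storage": {
        "files": [{
            "path": "/opt/helm-v3.17.0-linux-arm64.tar.gz",
            "contents": {"source": "https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz"},
        }],
        "links": [{
            # Mirrors op(9): /etc/extensions/kubernetes.raw -> the sysext image.
            "path": "/etc/extensions/kubernetes.raw",
            "target": "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw",
        }],
    },
    "systemd": {
        "units": [{
            "name": "prepare-helm.service",
            "enabled": True,  # corresponds to "setting preset to enabled"
            "contents": "[Unit]\nDescription=Unpack helm to /opt/bin\n",  # placeholder body
        }],
    },
}

print(json.dumps(config, indent=2))
```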
Sep 16 04:40:47.979640 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 16 04:40:47.987766 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 16 04:40:47.995208 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 16 04:40:47.995933 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 16 04:40:48.030826 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 16 04:40:48.036885 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 16 04:40:48.057671 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 16 04:40:48.062439 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 16 04:40:48.071219 systemd[1]: Stopped target timers.target - Timer Units. Sep 16 04:40:48.078899 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 16 04:40:48.078994 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 16 04:40:48.090457 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 16 04:40:48.094549 systemd[1]: Stopped target basic.target - Basic System. Sep 16 04:40:48.102749 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 16 04:40:48.110603 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 16 04:40:48.118418 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 16 04:40:48.126247 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Sep 16 04:40:48.134758 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 16 04:40:48.142540 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 16 04:40:48.151560 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 16 04:40:48.158927 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 16 04:40:48.167791 systemd[1]: Stopped target swap.target - Swaps. Sep 16 04:40:48.175581 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 16 04:40:48.175703 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 16 04:40:48.196768 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 16 04:40:48.200981 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 16 04:40:48.208979 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 16 04:40:48.209051 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 16 04:40:48.217965 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 16 04:40:48.218064 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 16 04:40:48.230292 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 16 04:40:48.230372 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 16 04:40:48.235015 systemd[1]: ignition-files.service: Deactivated successfully. Sep 16 04:40:48.235084 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 16 04:40:48.242372 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Sep 16 04:40:48.242433 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. 
Sep 16 04:40:48.311214 ignition[1256]: INFO : Ignition 2.22.0 Sep 16 04:40:48.311214 ignition[1256]: INFO : Stage: umount Sep 16 04:40:48.311214 ignition[1256]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 16 04:40:48.311214 ignition[1256]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 16 04:40:48.311214 ignition[1256]: INFO : umount: umount passed Sep 16 04:40:48.311214 ignition[1256]: INFO : Ignition finished successfully Sep 16 04:40:48.256754 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 16 04:40:48.265484 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 16 04:40:48.265581 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 16 04:40:48.282722 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 16 04:40:48.297629 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 16 04:40:48.297828 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 16 04:40:48.308565 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 16 04:40:48.308720 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 16 04:40:48.319462 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 16 04:40:48.321107 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 16 04:40:48.333880 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 16 04:40:48.336388 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 16 04:40:48.336469 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 16 04:40:48.345047 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 16 04:40:48.345106 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 16 04:40:48.351574 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 16 04:40:48.351629 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 16 04:40:48.359511 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 16 04:40:48.359550 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Sep 16 04:40:48.371914 systemd[1]: Stopped target network.target - Network. Sep 16 04:40:48.379370 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 16 04:40:48.379424 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 16 04:40:48.389023 systemd[1]: Stopped target paths.target - Path Units. Sep 16 04:40:48.395699 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 16 04:40:48.400582 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 16 04:40:48.405728 systemd[1]: Stopped target slices.target - Slice Units. Sep 16 04:40:48.412860 systemd[1]: Stopped target sockets.target - Socket Units. Sep 16 04:40:48.421252 systemd[1]: iscsid.socket: Deactivated successfully. Sep 16 04:40:48.421295 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 16 04:40:48.428417 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 16 04:40:48.428439 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 16 04:40:48.435930 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 16 04:40:48.435969 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 16 04:40:48.444155 systemd[1]: ignition-setup-pre.service: Deactivated successfully. 
Sep 16 04:40:48.444180 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 16 04:40:48.452337 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 16 04:40:48.459336 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 16 04:40:48.473418 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 16 04:40:48.473497 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 16 04:40:48.486714 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 16 04:40:48.486874 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 16 04:40:48.486966 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 16 04:40:48.499248 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 16 04:40:48.499426 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 16 04:40:48.499517 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 16 04:40:48.507647 systemd[1]: Stopped target network-pre.target - Preparation for Network. Sep 16 04:40:48.514357 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 16 04:40:48.514398 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 16 04:40:48.678706 kernel: hv_netvsc 000d3af6-385b-000d-3af6-385b000d3af6 eth0: Data path switched from VF: enP59280s1 Sep 16 04:40:48.522226 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 16 04:40:48.522273 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 16 04:40:48.535718 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 16 04:40:48.543526 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 16 04:40:48.543580 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 16 04:40:48.551009 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 16 04:40:48.551052 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 16 04:40:48.561179 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 16 04:40:48.561215 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 16 04:40:48.565980 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 16 04:40:48.566014 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 16 04:40:48.577202 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 16 04:40:48.585141 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 16 04:40:48.585194 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 16 04:40:48.610943 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 16 04:40:48.611072 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 16 04:40:48.620396 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 16 04:40:48.620432 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 16 04:40:48.627873 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 16 04:40:48.627898 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 16 04:40:48.635897 systemd[1]: dracut-pre-udev.service: Deactivated successfully. 
Sep 16 04:40:48.635935 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 16 04:40:48.647940 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 16 04:40:48.647977 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 16 04:40:48.665720 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 16 04:40:48.665771 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 16 04:40:48.683775 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 16 04:40:48.696196 systemd[1]: systemd-network-generator.service: Deactivated successfully. Sep 16 04:40:48.696260 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Sep 16 04:40:48.710389 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 16 04:40:48.710433 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 16 04:40:48.720826 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 16 04:40:48.720868 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 16 04:40:48.857451 systemd-journald[224]: Received SIGTERM from PID 1 (systemd). Sep 16 04:40:48.729936 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Sep 16 04:40:48.729981 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 16 04:40:48.730007 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 16 04:40:48.730413 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 16 04:40:48.730520 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 16 04:40:48.745525 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 16 04:40:48.747655 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 16 04:40:48.758885 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 16 04:40:48.767767 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 16 04:40:48.791899 systemd[1]: Switching root. Sep 16 04:40:48.896334 systemd-journald[224]: Journal stopped Sep 16 04:40:56.658487 kernel: SELinux: policy capability network_peer_controls=1 Sep 16 04:40:56.658506 kernel: SELinux: policy capability open_perms=1 Sep 16 04:40:56.658513 kernel: SELinux: policy capability extended_socket_class=1 Sep 16 04:40:56.658519 kernel: SELinux: policy capability always_check_network=0 Sep 16 04:40:56.658525 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 16 04:40:56.658532 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 16 04:40:56.658538 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 16 04:40:56.658543 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 16 04:40:56.658548 kernel: SELinux: policy capability userspace_initial_context=0 Sep 16 04:40:56.658554 kernel: audit: type=1403 audit(1757997650.473:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 16 04:40:56.658561 systemd[1]: Successfully loaded SELinux policy in 175.483ms. Sep 16 04:40:56.658568 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.364ms. 
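The tail of the line above records the pivot out of the initrd and the first SELinux figures of the real root: policy loaded in 175.483ms, /dev/, /dev/shm/ and /run/ relabeled in 4.364ms. Purely as a post-processing aid (not part of the boot flow), a small helper can pull those millisecond timings out of a dump like this one; the function and key names are mine:

```python
# Extract the millisecond timings systemd prints for SELinux policy load
# and relabeling from a console dump.
import re

POLICY = re.compile(r"loaded SELinux policy in ([\d.]+)ms")
RELABEL = re.compile(r"Relabeled .+? in ([\d.]+)ms")

def selinux_timings(log_text: str) -> dict[str, float]:
    out = {}
    if (m := POLICY.search(log_text)):
        out["policy_load_ms"] = float(m.group(1))
    if (m := RELABEL.search(log_text)):
        out["relabel_ms"] = float(m.group(1))
    return out

sample = ("Successfully loaded SELinux policy in 175.483ms. "
          "Relabeled /dev/, /dev/shm/, /run/ in 4.364ms.")
print(selinux_timings(sample))  # {'policy_load_ms': 175.483, 'relabel_ms': 4.364}
```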
Sep 16 04:40:56.658575 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 16 04:40:56.658581 systemd[1]: Detected virtualization microsoft. Sep 16 04:40:56.658588 systemd[1]: Detected architecture arm64. Sep 16 04:40:56.658594 systemd[1]: Detected first boot. Sep 16 04:40:56.658601 systemd[1]: Hostname set to <ci-4459.0.0-n-404d4275b5>. Sep 16 04:40:56.658616 systemd[1]: Initializing machine ID from random generator. Sep 16 04:40:56.658623 zram_generator::config[1299]: No configuration found. Sep 16 04:40:56.658629 kernel: NET: Registered PF_VSOCK protocol family Sep 16 04:40:56.658634 systemd[1]: Populated /etc with preset unit settings. Sep 16 04:40:56.658641 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 16 04:40:56.658648 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 16 04:40:56.658654 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 16 04:40:56.658659 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 16 04:40:56.658665 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 16 04:40:56.658672 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 16 04:40:56.658679 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 16 04:40:56.658685 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 16 04:40:56.658692 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 16 04:40:56.658698 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 16 04:40:56.658704 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 16 04:40:56.658710 systemd[1]: Created slice user.slice - User and Session Slice. Sep 16 04:40:56.658716 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 16 04:40:56.658722 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 16 04:40:56.658728 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 16 04:40:56.658734 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 16 04:40:56.658740 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 16 04:40:56.658747 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 16 04:40:56.658753 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Sep 16 04:40:56.658761 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 16 04:40:56.658767 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 16 04:40:56.658773 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 16 04:40:56.658779 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 16 04:40:56.658785 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
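Several of the facts systemd announces at this point (virtualization, architecture) can be reproduced from userspace. A hedged sketch, assuming `systemd-detect-virt` is on PATH as it is on Flatcar; the helper name is mine:

```python
# Reproduce the "Detected virtualization microsoft" / "Detected architecture
# arm64" facts with standard tools.
import platform
import subprocess

def detected_virt() -> str:
    # systemd-detect-virt prints the hypervisor name ("microsoft" on
    # Hyper-V/Azure) and exits non-zero on bare metal, hence check=False.
    out = subprocess.run(["systemd-detect-virt"], capture_output=True,
                         text=True, check=False)
    return out.stdout.strip() or "none"

print("virtualization:", detected_virt())
print("architecture:", platform.machine())  # kernel name "aarch64" = systemd's "arm64"
```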
Sep 16 04:40:56.658792 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 16 04:40:56.658798 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 16 04:40:56.658804 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 16 04:40:56.658810 systemd[1]: Reached target slices.target - Slice Units. Sep 16 04:40:56.658816 systemd[1]: Reached target swap.target - Swaps. Sep 16 04:40:56.658822 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 16 04:40:56.658829 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 16 04:40:56.658836 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 16 04:40:56.658842 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 16 04:40:56.658849 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 16 04:40:56.658855 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 16 04:40:56.658861 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 16 04:40:56.658867 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 16 04:40:56.658874 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 16 04:40:56.658880 systemd[1]: Mounting media.mount - External Media Directory... Sep 16 04:40:56.658886 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 16 04:40:56.658892 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 16 04:40:56.658899 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 16 04:40:56.658905 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 16 04:40:56.658911 systemd[1]: Reached target machines.target - Containers. Sep 16 04:40:56.658917 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 16 04:40:56.658925 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 16 04:40:56.658931 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 16 04:40:56.658938 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 16 04:40:56.658944 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 16 04:40:56.658951 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 16 04:40:56.658957 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 16 04:40:56.658963 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 16 04:40:56.658969 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 16 04:40:56.658975 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 16 04:40:56.658982 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 16 04:40:56.658988 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 16 04:40:56.658994 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 16 04:40:56.659000 systemd[1]: Stopped systemd-fsck-usr.service. 
Sep 16 04:40:56.659007 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 16 04:40:56.659013 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 16 04:40:56.659019 kernel: fuse: init (API version 7.41) Sep 16 04:40:56.659025 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 16 04:40:56.659032 kernel: loop: module loaded Sep 16 04:40:56.659037 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 16 04:40:56.659044 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 16 04:40:56.659050 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 16 04:40:56.659056 kernel: ACPI: bus type drm_connector registered Sep 16 04:40:56.659062 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 16 04:40:56.659069 systemd[1]: verity-setup.service: Deactivated successfully. Sep 16 04:40:56.659075 systemd[1]: Stopped verity-setup.service. Sep 16 04:40:56.659082 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 16 04:40:56.659100 systemd-journald[1382]: Collecting audit messages is disabled. Sep 16 04:40:56.659113 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 16 04:40:56.659120 systemd-journald[1382]: Journal started Sep 16 04:40:56.659135 systemd-journald[1382]: Runtime Journal (/run/log/journal/ab3e99e6d30645e596b5f0715697c520) is 8M, max 78.5M, 70.5M free. Sep 16 04:40:55.859913 systemd[1]: Queued start job for default target multi-user.target. Sep 16 04:40:55.866006 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Sep 16 04:40:55.866354 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 16 04:40:55.866622 systemd[1]: systemd-journald.service: Consumed 2.208s CPU time. Sep 16 04:40:56.675084 systemd[1]: Started systemd-journald.service - Journal Service. Sep 16 04:40:56.675788 systemd[1]: Mounted media.mount - External Media Directory. Sep 16 04:40:56.680107 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 16 04:40:56.684704 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 16 04:40:56.689522 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 16 04:40:56.693886 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 16 04:40:56.698783 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 16 04:40:56.704059 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 16 04:40:56.704186 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 16 04:40:56.709642 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 16 04:40:56.710108 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 16 04:40:56.715036 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 16 04:40:56.715153 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 16 04:40:56.719377 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 16 04:40:56.719493 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 16 04:40:56.724417 systemd[1]: modprobe@fuse.service: Deactivated successfully. 
Sep 16 04:40:56.724539 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 16 04:40:56.729216 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 16 04:40:56.729326 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 16 04:40:56.733798 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 16 04:40:56.738634 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 16 04:40:56.743851 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 16 04:40:56.749660 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 16 04:40:56.754663 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 16 04:40:56.767406 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 16 04:40:56.775707 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 16 04:40:56.786112 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 16 04:40:56.790594 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 16 04:40:56.790625 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 16 04:40:56.795038 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 16 04:40:56.803729 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 16 04:40:56.807681 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 16 04:40:56.856044 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 16 04:40:56.867103 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 16 04:40:56.871487 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 16 04:40:56.872118 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 16 04:40:56.876297 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 16 04:40:56.876901 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 16 04:40:56.882820 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 16 04:40:56.890725 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 16 04:40:56.895686 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 16 04:40:56.900314 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 16 04:40:56.908761 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 16 04:40:56.914299 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 16 04:40:56.920741 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 16 04:40:56.927478 systemd-journald[1382]: Time spent on flushing to /var/log/journal/ab3e99e6d30645e596b5f0715697c520 is 35.947ms for 943 entries. Sep 16 04:40:56.927478 systemd-journald[1382]: System Journal (/var/log/journal/ab3e99e6d30645e596b5f0715697c520) is 11.8M, max 2.6G, 2.6G free. 
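systemd-journal-flush.service, starting above, asks journald to move the runtime journal from /run/log/journal into persistent /var/log/journal. Both halves of that operation can be driven by hand with standard journalctl verbs; a minimal sketch (flushing needs privileges):

```python
# Drive the journal flush manually with standard journalctl verbs.
import subprocess

def journal_disk_usage() -> str:
    # "journalctl --disk-usage" reports the combined size of all journals.
    return subprocess.run(["journalctl", "--disk-usage"],
                          capture_output=True, text=True, check=True).stdout.strip()

def flush_runtime_journal() -> None:
    # "journalctl --flush" asks journald to move /run/log/journal into
    # /var/log/journal, the same request this service issues (requires root).
    subprocess.run(["journalctl", "--flush"], check=True)

print(journal_disk_usage())
```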
Sep 16 04:40:57.008377 systemd-journald[1382]: Received client request to flush runtime journal. Sep 16 04:40:57.008409 kernel: loop0: detected capacity change from 0 to 207008 Sep 16 04:40:57.008421 systemd-journald[1382]: /var/log/journal/ab3e99e6d30645e596b5f0715697c520/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating. Sep 16 04:40:57.008437 systemd-journald[1382]: Rotating system journal. Sep 16 04:40:57.008451 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 16 04:40:57.009596 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 16 04:40:57.024291 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 16 04:40:57.049354 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 16 04:40:57.049935 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 16 04:40:57.142857 kernel: loop1: detected capacity change from 0 to 100632 Sep 16 04:40:57.592633 kernel: loop2: detected capacity change from 0 to 27936 Sep 16 04:40:57.636512 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 16 04:40:57.643188 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 16 04:40:57.794867 systemd-tmpfiles[1456]: ACLs are not supported, ignoring. Sep 16 04:40:57.794879 systemd-tmpfiles[1456]: ACLs are not supported, ignoring. Sep 16 04:40:57.798594 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 16 04:40:58.123630 kernel: loop3: detected capacity change from 0 to 119368 Sep 16 04:40:58.451080 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 16 04:40:58.457312 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 16 04:40:58.481296 systemd-udevd[1461]: Using default interface naming scheme 'v255'. Sep 16 04:40:58.678630 kernel: loop4: detected capacity change from 0 to 207008 Sep 16 04:40:58.693623 kernel: loop5: detected capacity change from 0 to 100632 Sep 16 04:40:58.705627 kernel: loop6: detected capacity change from 0 to 27936 Sep 16 04:40:58.715631 kernel: loop7: detected capacity change from 0 to 119368 Sep 16 04:40:58.723115 (sd-merge)[1463]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Sep 16 04:40:58.723458 (sd-merge)[1463]: Merged extensions into '/usr'. Sep 16 04:40:58.727040 systemd[1]: Reload requested from client PID 1438 ('systemd-sysext') (unit systemd-sysext.service)... Sep 16 04:40:58.727140 systemd[1]: Reloading... Sep 16 04:40:58.783640 zram_generator::config[1497]: No configuration found. Sep 16 04:40:58.998884 systemd[1]: Reloading finished in 271 ms. Sep 16 04:40:59.015207 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 16 04:40:59.033643 systemd[1]: Starting ensure-sysext.service... Sep 16 04:40:59.037788 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 16 04:40:59.078156 systemd-tmpfiles[1545]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 16 04:40:59.078177 systemd-tmpfiles[1545]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Sep 16 04:40:59.078361 systemd-tmpfiles[1545]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
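The (sd-merge) lines above are systemd-sysext overlaying the four extension images onto /usr. A sketch of the discovery half only, assuming the standard sysext search directories; this is not how sd-merge is implemented, just where it looks:

```python
# List candidate system extensions the way one would look them up by hand.
from pathlib import Path

# Standard sysext search path; /etc/extensions holds the kubernetes.raw link
# written by the Ignition files stage earlier in this log.
SEARCH_PATH = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

def discover_extensions() -> list[str]:
    found = []
    for d in SEARCH_PATH:
        p = Path(d)
        if p.is_dir():
            # Both raw images ("*.raw") and plain directory trees are accepted.
            found += sorted(child.name for child in p.iterdir())
    return found

print(discover_extensions())
```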
Sep 16 04:40:59.078498 systemd-tmpfiles[1545]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 16 04:40:59.078939 systemd-tmpfiles[1545]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 16 04:40:59.079080 systemd-tmpfiles[1545]: ACLs are not supported, ignoring. Sep 16 04:40:59.079111 systemd-tmpfiles[1545]: ACLs are not supported, ignoring. Sep 16 04:40:59.101683 systemd[1]: Reload requested from client PID 1544 ('systemctl') (unit ensure-sysext.service)... Sep 16 04:40:59.101792 systemd[1]: Reloading... Sep 16 04:40:59.115396 systemd-tmpfiles[1545]: Detected autofs mount point /boot during canonicalization of boot. Sep 16 04:40:59.115409 systemd-tmpfiles[1545]: Skipping /boot Sep 16 04:40:59.121683 systemd-tmpfiles[1545]: Detected autofs mount point /boot during canonicalization of boot. Sep 16 04:40:59.121690 systemd-tmpfiles[1545]: Skipping /boot Sep 16 04:40:59.160688 zram_generator::config[1573]: No configuration found. Sep 16 04:40:59.311819 systemd[1]: Reloading finished in 209 ms. Sep 16 04:40:59.334574 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 16 04:40:59.347650 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 16 04:40:59.372327 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Sep 16 04:40:59.374963 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 16 04:40:59.401627 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#209 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Sep 16 04:40:59.447899 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 16 04:40:59.461625 kernel: mousedev: PS/2 mouse device common for all mice Sep 16 04:40:59.469629 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 16 04:40:59.479096 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 16 04:40:59.497916 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 16 04:40:59.505511 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 16 04:40:59.526352 kernel: hv_vmbus: registering driver hv_balloon Sep 16 04:40:59.526418 kernel: hv_vmbus: registering driver hyperv_fb Sep 16 04:40:59.526428 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Sep 16 04:40:59.526477 kernel: hv_balloon: Memory hot add disabled on ARM64 Sep 16 04:40:59.529619 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Sep 16 04:40:59.538426 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Sep 16 04:40:59.537687 systemd[1]: Finished ensure-sysext.service. Sep 16 04:40:59.542034 kernel: Console: switching to colour dummy device 80x25 Sep 16 04:40:59.548639 kernel: Console: switching to colour frame buffer device 128x48 Sep 16 04:40:59.557226 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 16 04:40:59.559880 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 16 04:40:59.568858 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 16 04:40:59.575153 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Sep 16 04:40:59.581458 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 16 04:40:59.586271 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 16 04:40:59.586407 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 16 04:40:59.586560 systemd[1]: Reached target time-set.target - System Time Set. Sep 16 04:40:59.592096 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 16 04:40:59.598878 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 16 04:40:59.605106 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 16 04:40:59.606643 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 16 04:40:59.612087 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 16 04:40:59.612240 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 16 04:40:59.618415 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 16 04:40:59.618548 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 16 04:40:59.624111 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 16 04:40:59.624225 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 16 04:40:59.635376 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 16 04:40:59.635516 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 16 04:40:59.638266 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 16 04:40:59.638428 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 16 04:40:59.644644 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 16 04:40:59.645931 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 16 04:40:59.660084 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 16 04:40:59.667383 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 16 04:40:59.673326 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 16 04:40:59.673544 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 16 04:40:59.684782 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 16 04:40:59.771511 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Sep 16 04:40:59.777149 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 16 04:40:59.786763 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 16 04:40:59.794207 augenrules[1798]: No rules Sep 16 04:40:59.795058 systemd[1]: audit-rules.service: Deactivated successfully. Sep 16 04:40:59.795227 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 16 04:40:59.822184 systemd-resolved[1694]: Positive Trust Anchors: Sep 16 04:40:59.822197 systemd-resolved[1694]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 16 04:40:59.822217 systemd-resolved[1694]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 16 04:40:59.835709 systemd-resolved[1694]: Using system hostname 'ci-4459.0.0-n-404d4275b5'. Sep 16 04:40:59.846962 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 16 04:40:59.851238 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 16 04:40:59.870479 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 16 04:40:59.915637 kernel: MACsec IEEE 802.1AE Sep 16 04:40:59.957399 systemd-networkd[1693]: lo: Link UP Sep 16 04:40:59.957409 systemd-networkd[1693]: lo: Gained carrier Sep 16 04:40:59.958838 systemd-networkd[1693]: Enumeration completed Sep 16 04:40:59.959255 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 16 04:40:59.959758 systemd-networkd[1693]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 16 04:40:59.959766 systemd-networkd[1693]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 16 04:40:59.964535 systemd[1]: Reached target network.target - Network. Sep 16 04:40:59.969476 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 16 04:40:59.975267 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 16 04:41:00.024192 kernel: mlx5_core e790:00:02.0 enP59280s1: Link up Sep 16 04:41:00.024472 kernel: buffer_size[0]=0 is not enough for lossless buffer Sep 16 04:41:00.048621 kernel: hv_netvsc 000d3af6-385b-000d-3af6-385b000d3af6 eth0: Data path switched to VF: enP59280s1 Sep 16 04:41:00.050271 systemd-networkd[1693]: enP59280s1: Link UP Sep 16 04:41:00.051239 systemd-networkd[1693]: eth0: Link UP Sep 16 04:41:00.051245 systemd-networkd[1693]: eth0: Gained carrier Sep 16 04:41:00.051265 systemd-networkd[1693]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 16 04:41:00.053266 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 16 04:41:00.061136 systemd-networkd[1693]: enP59280s1: Gained carrier Sep 16 04:41:00.066644 systemd-networkd[1693]: eth0: DHCPv4 address 10.200.20.12/24, gateway 10.200.20.1 acquired from 168.63.129.16 Sep 16 04:41:00.835121 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 16 04:41:01.488815 systemd-networkd[1693]: eth0: Gained IPv6LL Sep 16 04:41:01.492894 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 16 04:41:01.498230 systemd[1]: Reached target network-online.target - Network is Online. Sep 16 04:41:01.927014 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
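By the end of the line above, eth0 has carrier and a DHCPv4 lease (10.200.20.12/24, handed out by 168.63.129.16, the same wireserver that answers the metadata calls). The same two facts can be read back without systemd-networkd; the helper names are assumptions:

```python
# Read link carrier and the effective source address straight from the kernel.
import pathlib
import socket

def carrier(ifname: str = "eth0") -> bool:
    # May raise OSError if the link is administratively down.
    return pathlib.Path(f"/sys/class/net/{ifname}/carrier").read_text().strip() == "1"

def primary_ipv4() -> str:
    # Classic trick: connecting a UDP socket picks a source address without
    # sending any packets.
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.connect(("168.63.129.16", 80))  # Azure wireserver, as in the log
        return s.getsockname()[0]

print(carrier(), primary_ipv4())
```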
Sep 16 04:41:01.932100 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 16 04:41:05.582045 ldconfig[1433]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 16 04:41:05.598244 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 16 04:41:05.605230 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 16 04:41:05.631997 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 16 04:41:05.636422 systemd[1]: Reached target sysinit.target - System Initialization. Sep 16 04:41:05.640626 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 16 04:41:05.645350 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 16 04:41:05.650299 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 16 04:41:05.654363 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 16 04:41:05.659031 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 16 04:41:05.663502 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 16 04:41:05.663525 systemd[1]: Reached target paths.target - Path Units. Sep 16 04:41:05.666811 systemd[1]: Reached target timers.target - Timer Units. Sep 16 04:41:05.696828 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 16 04:41:05.702230 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 16 04:41:05.707272 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 16 04:41:05.712059 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 16 04:41:05.716920 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 16 04:41:05.722550 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 16 04:41:05.726828 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 16 04:41:05.731664 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 16 04:41:05.735704 systemd[1]: Reached target sockets.target - Socket Units. Sep 16 04:41:05.739189 systemd[1]: Reached target basic.target - Basic System. Sep 16 04:41:05.742673 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 16 04:41:05.742693 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 16 04:41:05.755248 systemd[1]: Starting chronyd.service - NTP client/server... Sep 16 04:41:05.771698 systemd[1]: Starting containerd.service - containerd container runtime... Sep 16 04:41:05.776348 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 16 04:41:05.782727 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 16 04:41:05.788449 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 16 04:41:05.796704 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
Sep 16 04:41:05.802541 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 16 04:41:05.806384 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 16 04:41:05.810507 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Sep 16 04:41:05.815904 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Sep 16 04:41:05.817731 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:41:05.831457 jq[1833]: false Sep 16 04:41:05.832836 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 16 04:41:05.838292 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 16 04:41:05.838862 KVP[1835]: KVP starting; pid is:1835 Sep 16 04:41:05.841307 chronyd[1825]: chronyd version 4.7 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG) Sep 16 04:41:05.845203 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 16 04:41:05.850739 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 16 04:41:05.859697 kernel: hv_utils: KVP IC version 4.0 Sep 16 04:41:05.858318 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 16 04:41:05.859465 KVP[1835]: KVP LIC Version: 3.1 Sep 16 04:41:05.870659 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 16 04:41:05.877452 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 16 04:41:05.877864 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 16 04:41:05.879826 systemd[1]: Starting update-engine.service - Update Engine... Sep 16 04:41:05.886206 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 16 04:41:05.895646 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 16 04:41:05.903341 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 16 04:41:05.903423 jq[1851]: true Sep 16 04:41:05.904074 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 16 04:41:05.909966 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 16 04:41:05.910127 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 16 04:41:05.923869 systemd[1]: motdgen.service: Deactivated successfully. Sep 16 04:41:05.924020 extend-filesystems[1834]: Found /dev/sda6 Sep 16 04:41:05.927339 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 16 04:41:05.932874 chronyd[1825]: Timezone right/UTC failed leap second check, ignoring Sep 16 04:41:05.933033 chronyd[1825]: Loaded seccomp filter (level 2) Sep 16 04:41:05.935138 systemd[1]: Started chronyd.service - NTP client/server. 
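chronyd comes up above with its seccomp filter loaded (level 2) after discarding the right/UTC timezone for leap-second data. Once it is running, its view of clock synchronization is queryable with the standard chronyc client; a trivial sketch:

```python
# Query the freshly started chronyd; "chronyc tracking" is the standard verb
# for reporting reference ID, stratum, and current offset.
import subprocess

print(subprocess.run(["chronyc", "tracking"],
                     capture_output=True, text=True, check=True).stdout)
```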
Sep 16 04:41:05.941648 jq[1863]: true Sep 16 04:41:05.940785 (ntainerd)[1865]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 16 04:41:05.953533 extend-filesystems[1834]: Found /dev/sda9 Sep 16 04:41:05.958889 extend-filesystems[1834]: Checking size of /dev/sda9 Sep 16 04:41:05.962266 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 16 04:41:05.993040 systemd-logind[1847]: New seat seat0. Sep 16 04:41:05.995506 systemd-logind[1847]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Sep 16 04:41:06.005033 tar[1862]: linux-arm64/LICENSE Sep 16 04:41:06.005033 tar[1862]: linux-arm64/helm Sep 16 04:41:05.995758 systemd[1]: Started systemd-logind.service - User Login Management. Sep 16 04:41:06.021561 extend-filesystems[1834]: Old size kept for /dev/sda9 Sep 16 04:41:06.025525 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 16 04:41:06.025721 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 16 04:41:06.046390 update_engine[1849]: I20250916 04:41:06.046324 1849 main.cc:92] Flatcar Update Engine starting Sep 16 04:41:06.090159 bash[1897]: Updated "/home/core/.ssh/authorized_keys" Sep 16 04:41:06.085206 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 16 04:41:06.094281 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 16 04:41:06.270006 dbus-daemon[1828]: [system] SELinux support is enabled Sep 16 04:41:06.270965 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 16 04:41:06.278522 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 16 04:41:06.278862 update_engine[1849]: I20250916 04:41:06.278666 1849 update_check_scheduler.cc:74] Next update check in 10m38s Sep 16 04:41:06.279214 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 16 04:41:06.287908 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 16 04:41:06.287926 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 16 04:41:06.296356 systemd[1]: Started update-engine.service - Update Engine. Sep 16 04:41:06.296440 dbus-daemon[1828]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 16 04:41:06.307141 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Sep 16 04:41:06.390284 coreos-metadata[1827]: Sep 16 04:41:06.390 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Sep 16 04:41:06.395868 coreos-metadata[1827]: Sep 16 04:41:06.395 INFO Fetch successful Sep 16 04:41:06.396412 coreos-metadata[1827]: Sep 16 04:41:06.396 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Sep 16 04:41:06.400688 coreos-metadata[1827]: Sep 16 04:41:06.400 INFO Fetch successful Sep 16 04:41:06.401038 coreos-metadata[1827]: Sep 16 04:41:06.400 INFO Fetching http://168.63.129.16/machine/2ace9d04-d50e-4949-ad1a-3953183ca9c6/5ae4c72f%2D9170%2D4bc0%2Da35a%2D72b2a4016807.%5Fci%2D4459.0.0%2Dn%2D404d4275b5?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Sep 16 04:41:06.402145 coreos-metadata[1827]: Sep 16 04:41:06.402 INFO Fetch successful Sep 16 04:41:06.405089 coreos-metadata[1827]: Sep 16 04:41:06.405 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Sep 16 04:41:06.410981 coreos-metadata[1827]: Sep 16 04:41:06.410 INFO Fetch successful Sep 16 04:41:06.450490 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 16 04:41:06.455918 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 16 04:41:06.458799 tar[1862]: linux-arm64/README.md Sep 16 04:41:06.471443 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 16 04:41:06.561761 sshd_keygen[1864]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 16 04:41:06.580307 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 16 04:41:06.587832 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 16 04:41:06.592833 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Sep 16 04:41:06.609900 systemd[1]: issuegen.service: Deactivated successfully. Sep 16 04:41:06.610673 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 16 04:41:06.618286 locksmithd[1968]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 16 04:41:06.621363 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 16 04:41:06.634071 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Sep 16 04:41:06.653323 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 16 04:41:06.660744 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 16 04:41:06.668835 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 16 04:41:06.674092 systemd[1]: Reached target getty.target - Login Prompts. Sep 16 04:41:06.765241 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
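sshd-keygen, logged above, generates any missing host keys ("RSA ECDSA ED25519") before sshd can accept connections. `ssh-keygen -A` performs the equivalent bulk generation; a sketch that would need root on a real host:

```python
# Generate any missing default host keys, then show what exists.
import subprocess
from pathlib import Path

subprocess.run(["ssh-keygen", "-A"], check=True)  # writes under /etc/ssh, needs root
for key in sorted(Path("/etc/ssh").glob("ssh_host_*_key.pub")):
    print(key.name, key.read_text().split()[0])  # filename + key type
```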
Sep 16 04:41:06.770121 (kubelet)[2014]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 16 04:41:06.835518 containerd[1865]: time="2025-09-16T04:41:06Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Sep 16 04:41:06.837731 containerd[1865]: time="2025-09-16T04:41:06.836175176Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Sep 16 04:41:06.843182 containerd[1865]: time="2025-09-16T04:41:06.843156536Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.064µs"
Sep 16 04:41:06.843182 containerd[1865]: time="2025-09-16T04:41:06.843178368Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Sep 16 04:41:06.843268 containerd[1865]: time="2025-09-16T04:41:06.843191736Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Sep 16 04:41:06.843333 containerd[1865]: time="2025-09-16T04:41:06.843318512Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Sep 16 04:41:06.843352 containerd[1865]: time="2025-09-16T04:41:06.843333192Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Sep 16 04:41:06.843352 containerd[1865]: time="2025-09-16T04:41:06.843349136Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 16 04:41:06.843393 containerd[1865]: time="2025-09-16T04:41:06.843383056Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 16 04:41:06.843393 containerd[1865]: time="2025-09-16T04:41:06.843391320Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 16 04:41:06.843549 containerd[1865]: time="2025-09-16T04:41:06.843534648Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 16 04:41:06.843563 containerd[1865]: time="2025-09-16T04:41:06.843547736Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 16 04:41:06.843563 containerd[1865]: time="2025-09-16T04:41:06.843555240Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 16 04:41:06.843563 containerd[1865]: time="2025-09-16T04:41:06.843560296Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Sep 16 04:41:06.843638 containerd[1865]: time="2025-09-16T04:41:06.843628480Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Sep 16 04:41:06.843783 containerd[1865]: time="2025-09-16T04:41:06.843764064Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 16 04:41:06.843809 containerd[1865]: time="2025-09-16T04:41:06.843787728Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 16 04:41:06.843809 containerd[1865]: time="2025-09-16T04:41:06.843794944Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Sep 16 04:41:06.843844 containerd[1865]: time="2025-09-16T04:41:06.843816040Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Sep 16 04:41:06.843979 containerd[1865]: time="2025-09-16T04:41:06.843952752Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Sep 16 04:41:06.844412 containerd[1865]: time="2025-09-16T04:41:06.844007536Z" level=info msg="metadata content store policy set" policy=shared
Sep 16 04:41:06.859438 containerd[1865]: time="2025-09-16T04:41:06.859411616Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Sep 16 04:41:06.859492 containerd[1865]: time="2025-09-16T04:41:06.859451400Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Sep 16 04:41:06.859492 containerd[1865]: time="2025-09-16T04:41:06.859468216Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Sep 16 04:41:06.859492 containerd[1865]: time="2025-09-16T04:41:06.859476640Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Sep 16 04:41:06.859492 containerd[1865]: time="2025-09-16T04:41:06.859484352Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Sep 16 04:41:06.859492 containerd[1865]: time="2025-09-16T04:41:06.859492320Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Sep 16 04:41:06.859575 containerd[1865]: time="2025-09-16T04:41:06.859500136Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Sep 16 04:41:06.859575 containerd[1865]: time="2025-09-16T04:41:06.859507752Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Sep 16 04:41:06.859575 containerd[1865]: time="2025-09-16T04:41:06.859515056Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Sep 16 04:41:06.859575 containerd[1865]: time="2025-09-16T04:41:06.859521320Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Sep 16 04:41:06.859575 containerd[1865]: time="2025-09-16T04:41:06.859527184Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Sep 16 04:41:06.859575 containerd[1865]: time="2025-09-16T04:41:06.859534896Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Sep 16 04:41:06.859667 containerd[1865]: time="2025-09-16T04:41:06.859646720Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Sep 16 04:41:06.859667 containerd[1865]: time="2025-09-16T04:41:06.859662456Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Sep 16 04:41:06.859690 containerd[1865]: time="2025-09-16T04:41:06.859673920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Sep 16 04:41:06.859690 containerd[1865]: time="2025-09-16T04:41:06.859684024Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Sep 16 04:41:06.859711 containerd[1865]: time="2025-09-16T04:41:06.859690976Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Sep 16 04:41:06.859711 containerd[1865]: time="2025-09-16T04:41:06.859698112Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Sep 16 04:41:06.859711 containerd[1865]: time="2025-09-16T04:41:06.859705136Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Sep 16 04:41:06.859776 containerd[1865]: time="2025-09-16T04:41:06.859711536Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Sep 16 04:41:06.859776 containerd[1865]: time="2025-09-16T04:41:06.859718792Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Sep 16 04:41:06.859776 containerd[1865]: time="2025-09-16T04:41:06.859724968Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Sep 16 04:41:06.859776 containerd[1865]: time="2025-09-16T04:41:06.859731720Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Sep 16 04:41:06.859824 containerd[1865]: time="2025-09-16T04:41:06.859775880Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Sep 16 04:41:06.859824 containerd[1865]: time="2025-09-16T04:41:06.859785632Z" level=info msg="Start snapshots syncer"
Sep 16 04:41:06.859824 containerd[1865]: time="2025-09-16T04:41:06.859799696Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Sep 16 04:41:06.859962 containerd[1865]: time="2025-09-16T04:41:06.859935880Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Sep 16 04:41:06.860031 containerd[1865]: time="2025-09-16T04:41:06.859975256Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Sep 16 04:41:06.860031 containerd[1865]: time="2025-09-16T04:41:06.860020128Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Sep 16 04:41:06.860114 containerd[1865]: time="2025-09-16T04:41:06.860100520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Sep 16 04:41:06.860134 containerd[1865]: time="2025-09-16T04:41:06.860119352Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Sep 16 04:41:06.860134 containerd[1865]: time="2025-09-16T04:41:06.860127064Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Sep 16 04:41:06.860160 containerd[1865]: time="2025-09-16T04:41:06.860134432Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Sep 16 04:41:06.860160 containerd[1865]: time="2025-09-16T04:41:06.860141928Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Sep 16 04:41:06.860160 containerd[1865]: time="2025-09-16T04:41:06.860148776Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Sep 16 04:41:06.860160 containerd[1865]: time="2025-09-16T04:41:06.860155256Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Sep 16 04:41:06.860207 containerd[1865]: time="2025-09-16T04:41:06.860175400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Sep 16 04:41:06.860207 containerd[1865]: time="2025-09-16T04:41:06.860183240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Sep 16 04:41:06.860207 containerd[1865]: time="2025-09-16T04:41:06.860189776Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Sep 16 04:41:06.860241 containerd[1865]: time="2025-09-16T04:41:06.860212504Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 16 04:41:06.860241 containerd[1865]: time="2025-09-16T04:41:06.860221088Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 16 04:41:06.860241 containerd[1865]: time="2025-09-16T04:41:06.860226584Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 16 04:41:06.860241 containerd[1865]: time="2025-09-16T04:41:06.860232240Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 16 04:41:06.860241 containerd[1865]: time="2025-09-16T04:41:06.860236840Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Sep 16 04:41:06.860301 containerd[1865]: time="2025-09-16T04:41:06.860242448Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Sep 16 04:41:06.860301 containerd[1865]: time="2025-09-16T04:41:06.860251864Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Sep 16 04:41:06.860301 containerd[1865]: time="2025-09-16T04:41:06.860261984Z" level=info msg="runtime interface created"
Sep 16 04:41:06.860301 containerd[1865]: time="2025-09-16T04:41:06.860264960Z" level=info msg="created NRI interface"
Sep 16 04:41:06.860301 containerd[1865]: time="2025-09-16T04:41:06.860270032Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Sep 16 04:41:06.860301 containerd[1865]: time="2025-09-16T04:41:06.860276824Z" level=info msg="Connect containerd service"
Sep 16 04:41:06.860370 containerd[1865]: time="2025-09-16T04:41:06.860307176Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 16 04:41:06.861032 containerd[1865]: time="2025-09-16T04:41:06.860988328Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 16 04:41:07.095660 kubelet[2014]: E0916 04:41:07.095541 2014 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 16 04:41:07.097589 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 16 04:41:07.097716 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 16 04:41:07.098041 systemd[1]: kubelet.service: Consumed 538ms CPU time, 253.8M memory peak.
Sep 16 04:41:07.209973 containerd[1865]: time="2025-09-16T04:41:07.209916616Z" level=info msg="Start subscribing containerd event"
Sep 16 04:41:07.209973 containerd[1865]: time="2025-09-16T04:41:07.209982184Z" level=info msg="Start recovering state"
Sep 16 04:41:07.210100 containerd[1865]: time="2025-09-16T04:41:07.210055272Z" level=info msg="Start event monitor"
Sep 16 04:41:07.210100 containerd[1865]: time="2025-09-16T04:41:07.210063040Z" level=info msg="Start cni network conf syncer for default"
Sep 16 04:41:07.210100 containerd[1865]: time="2025-09-16T04:41:07.210068768Z" level=info msg="Start streaming server"
Sep 16 04:41:07.210100 containerd[1865]: time="2025-09-16T04:41:07.210075048Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Sep 16 04:41:07.210100 containerd[1865]: time="2025-09-16T04:41:07.210080008Z" level=info msg="runtime interface starting up..."
Sep 16 04:41:07.210100 containerd[1865]: time="2025-09-16T04:41:07.210083440Z" level=info msg="starting plugins..."
Sep 16 04:41:07.210100 containerd[1865]: time="2025-09-16T04:41:07.210093976Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Sep 16 04:41:07.210546 containerd[1865]: time="2025-09-16T04:41:07.210525656Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 16 04:41:07.210582 containerd[1865]: time="2025-09-16T04:41:07.210567032Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 16 04:41:07.210651 containerd[1865]: time="2025-09-16T04:41:07.210638624Z" level=info msg="containerd successfully booted in 0.375426s"
Sep 16 04:41:07.211727 systemd[1]: Started containerd.service - containerd container runtime.
Sep 16 04:41:07.218926 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 16 04:41:07.223723 systemd[1]: Startup finished in 1.586s (kernel) + 14.839s (initrd) + 16.924s (userspace) = 33.350s.
Sep 16 04:41:08.008805 login[2008]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 04:41:08.009467 login[2007]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 04:41:08.014632 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 16 04:41:08.015370 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 16 04:41:08.022305 systemd-logind[1847]: New session 2 of user core.
Sep 16 04:41:08.025196 systemd-logind[1847]: New session 1 of user core.
Sep 16 04:41:08.050044 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 16 04:41:08.052025 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 16 04:41:08.073951 (systemd)[2041]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 16 04:41:08.075635 systemd-logind[1847]: New session c1 of user core.
Sep 16 04:41:08.352136 systemd[2041]: Queued start job for default target default.target.
Sep 16 04:41:08.364322 systemd[2041]: Created slice app.slice - User Application Slice.
Sep 16 04:41:08.364344 systemd[2041]: Reached target paths.target - Paths.
Sep 16 04:41:08.364452 systemd[2041]: Reached target timers.target - Timers.
Sep 16 04:41:08.365454 systemd[2041]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 16 04:41:08.373143 systemd[2041]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 16 04:41:08.373256 systemd[2041]: Reached target sockets.target - Sockets.
Sep 16 04:41:08.373353 systemd[2041]: Reached target basic.target - Basic System.
Sep 16 04:41:08.373448 systemd[2041]: Reached target default.target - Main User Target.
Sep 16 04:41:08.373470 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 16 04:41:08.373554 systemd[2041]: Startup finished in 292ms.
Sep 16 04:41:08.385907 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 16 04:41:08.386427 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 16 04:41:08.840197 waagent[2005]: 2025-09-16T04:41:08.840077Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4
Sep 16 04:41:08.844192 waagent[2005]: 2025-09-16T04:41:08.844152Z INFO Daemon Daemon OS: flatcar 4459.0.0
Sep 16 04:41:08.847221 waagent[2005]: 2025-09-16T04:41:08.847196Z INFO Daemon Daemon Python: 3.11.13
Sep 16 04:41:08.851699 waagent[2005]: 2025-09-16T04:41:08.851663Z INFO Daemon Daemon Run daemon
Sep 16 04:41:08.854812 waagent[2005]: 2025-09-16T04:41:08.854597Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4459.0.0'
Sep 16 04:41:08.860724 waagent[2005]: 2025-09-16T04:41:08.860688Z INFO Daemon Daemon Using waagent for provisioning
Sep 16 04:41:08.864215 waagent[2005]: 2025-09-16T04:41:08.864185Z INFO Daemon Daemon Activate resource disk
Sep 16 04:41:08.867254 waagent[2005]: 2025-09-16T04:41:08.867230Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
Sep 16 04:41:08.874728 waagent[2005]: 2025-09-16T04:41:08.874694Z INFO Daemon Daemon Found device: None
Sep 16 04:41:08.877687 waagent[2005]: 2025-09-16T04:41:08.877660Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
Sep 16 04:41:08.883228 waagent[2005]: 2025-09-16T04:41:08.883204Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
Sep 16 04:41:08.890920 waagent[2005]: 2025-09-16T04:41:08.890885Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Sep 16 04:41:08.894817 waagent[2005]: 2025-09-16T04:41:08.894791Z INFO Daemon Daemon Running default provisioning handler
Sep 16 04:41:08.902958 waagent[2005]: 2025-09-16T04:41:08.902913Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4.
Sep 16 04:41:08.912343 waagent[2005]: 2025-09-16T04:41:08.912305Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Sep 16 04:41:08.919038 waagent[2005]: 2025-09-16T04:41:08.919007Z INFO Daemon Daemon cloud-init is enabled: False
Sep 16 04:41:08.922700 waagent[2005]: 2025-09-16T04:41:08.922677Z INFO Daemon Daemon Copying ovf-env.xml
Sep 16 04:41:09.094714 waagent[2005]: 2025-09-16T04:41:09.093899Z INFO Daemon Daemon Successfully mounted dvd
Sep 16 04:41:09.135959 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
Sep 16 04:41:09.138664 waagent[2005]: 2025-09-16T04:41:09.137839Z INFO Daemon Daemon Detect protocol endpoint
Sep 16 04:41:09.141235 waagent[2005]: 2025-09-16T04:41:09.141203Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Sep 16 04:41:09.145144 waagent[2005]: 2025-09-16T04:41:09.145118Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
Sep 16 04:41:09.149682 waagent[2005]: 2025-09-16T04:41:09.149656Z INFO Daemon Daemon Test for route to 168.63.129.16
Sep 16 04:41:09.153311 waagent[2005]: 2025-09-16T04:41:09.153282Z INFO Daemon Daemon Route to 168.63.129.16 exists
Sep 16 04:41:09.156766 waagent[2005]: 2025-09-16T04:41:09.156743Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
Sep 16 04:41:09.207964 waagent[2005]: 2025-09-16T04:41:09.207926Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
Sep 16 04:41:09.212450 waagent[2005]: 2025-09-16T04:41:09.212427Z INFO Daemon Daemon Wire protocol version:2012-11-30
Sep 16 04:41:09.216039 waagent[2005]: 2025-09-16T04:41:09.216017Z INFO Daemon Daemon Server preferred version:2015-04-05
Sep 16 04:41:09.334689 waagent[2005]: 2025-09-16T04:41:09.334594Z INFO Daemon Daemon Initializing goal state during protocol detection
Sep 16 04:41:09.338995 waagent[2005]: 2025-09-16T04:41:09.338961Z INFO Daemon Daemon Forcing an update of the goal state.
Sep 16 04:41:09.345281 waagent[2005]: 2025-09-16T04:41:09.345206Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1]
Sep 16 04:41:09.400321 waagent[2005]: 2025-09-16T04:41:09.400285Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175
Sep 16 04:41:09.404518 waagent[2005]: 2025-09-16T04:41:09.404397Z INFO Daemon
Sep 16 04:41:09.406373 waagent[2005]: 2025-09-16T04:41:09.406346Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 056eccc2-731a-4270-9745-954355fd591b eTag: 14870946685013223469 source: Fabric]
Sep 16 04:41:09.414204 waagent[2005]: 2025-09-16T04:41:09.414177Z INFO Daemon The vmSettings originated via Fabric; will ignore them.
Sep 16 04:41:09.418816 waagent[2005]: 2025-09-16T04:41:09.418785Z INFO Daemon
Sep 16 04:41:09.420745 waagent[2005]: 2025-09-16T04:41:09.420719Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1]
Sep 16 04:41:09.428255 waagent[2005]: 2025-09-16T04:41:09.428226Z INFO Daemon Daemon Downloading artifacts profile blob
Sep 16 04:41:09.482305 waagent[2005]: 2025-09-16T04:41:09.482252Z INFO Daemon Downloaded certificate {'thumbprint': 'D5EC5460ED9CA9A275A85A3B069344B3CAFD6D6A', 'hasPrivateKey': True}
Sep 16 04:41:09.488672 waagent[2005]: 2025-09-16T04:41:09.488638Z INFO Daemon Fetch goal state completed
Sep 16 04:41:09.496713 waagent[2005]: 2025-09-16T04:41:09.496687Z INFO Daemon Daemon Starting provisioning
Sep 16 04:41:09.500170 waagent[2005]: 2025-09-16T04:41:09.500141Z INFO Daemon Daemon Handle ovf-env.xml.
Sep 16 04:41:09.503157 waagent[2005]: 2025-09-16T04:41:09.503137Z INFO Daemon Daemon Set hostname [ci-4459.0.0-n-404d4275b5]
Sep 16 04:41:09.563297 waagent[2005]: 2025-09-16T04:41:09.563249Z INFO Daemon Daemon Publish hostname [ci-4459.0.0-n-404d4275b5]
Sep 16 04:41:09.567512 waagent[2005]: 2025-09-16T04:41:09.567478Z INFO Daemon Daemon Examine /proc/net/route for primary interface
Sep 16 04:41:09.571822 waagent[2005]: 2025-09-16T04:41:09.571792Z INFO Daemon Daemon Primary interface is [eth0]
Sep 16 04:41:09.591624 systemd-networkd[1693]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 16 04:41:09.591628 systemd-networkd[1693]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 16 04:41:09.591675 systemd-networkd[1693]: eth0: DHCP lease lost
Sep 16 04:41:09.592284 waagent[2005]: 2025-09-16T04:41:09.592205Z INFO Daemon Daemon Create user account if not exists
Sep 16 04:41:09.596043 waagent[2005]: 2025-09-16T04:41:09.595979Z INFO Daemon Daemon User core already exists, skip useradd
Sep 16 04:41:09.599994 waagent[2005]: 2025-09-16T04:41:09.599955Z INFO Daemon Daemon Configure sudoer
Sep 16 04:41:09.607765 waagent[2005]: 2025-09-16T04:41:09.607723Z INFO Daemon Daemon Configure sshd
Sep 16 04:41:09.614032 waagent[2005]: 2025-09-16T04:41:09.613996Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive.
Sep 16 04:41:09.622894 waagent[2005]: 2025-09-16T04:41:09.622860Z INFO Daemon Daemon Deploy ssh public key.
Sep 16 04:41:09.626672 systemd-networkd[1693]: eth0: DHCPv4 address 10.200.20.12/24, gateway 10.200.20.1 acquired from 168.63.129.16
Sep 16 04:41:10.769678 waagent[2005]: 2025-09-16T04:41:10.769631Z INFO Daemon Daemon Provisioning complete
Sep 16 04:41:10.782099 waagent[2005]: 2025-09-16T04:41:10.782062Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
Sep 16 04:41:10.786294 waagent[2005]: 2025-09-16T04:41:10.786265Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
Sep 16 04:41:10.793034 waagent[2005]: 2025-09-16T04:41:10.793007Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent
Sep 16 04:41:10.888647 waagent[2091]: 2025-09-16T04:41:10.888195Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4)
Sep 16 04:41:10.888647 waagent[2091]: 2025-09-16T04:41:10.888302Z INFO ExtHandler ExtHandler OS: flatcar 4459.0.0
Sep 16 04:41:10.888647 waagent[2091]: 2025-09-16T04:41:10.888340Z INFO ExtHandler ExtHandler Python: 3.11.13
Sep 16 04:41:10.888647 waagent[2091]: 2025-09-16T04:41:10.888374Z INFO ExtHandler ExtHandler CPU Arch: aarch64
Sep 16 04:41:10.981102 waagent[2091]: 2025-09-16T04:41:10.981039Z INFO ExtHandler ExtHandler Distro: flatcar-4459.0.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0;
Sep 16 04:41:10.981389 waagent[2091]: 2025-09-16T04:41:10.981362Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Sep 16 04:41:10.981520 waagent[2091]: 2025-09-16T04:41:10.981496Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Sep 16 04:41:10.986562 waagent[2091]: 2025-09-16T04:41:10.986515Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
Sep 16 04:41:10.990679 waagent[2091]: 2025-09-16T04:41:10.990648Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175
Sep 16 04:41:10.991088 waagent[2091]: 2025-09-16T04:41:10.991058Z INFO ExtHandler
Sep 16 04:41:10.991262 waagent[2091]: 2025-09-16T04:41:10.991236Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 579f9514-cdba-4c4c-a3eb-42112f4092ff eTag: 14870946685013223469 source: Fabric]
Sep 16 04:41:10.991570 waagent[2091]: 2025-09-16T04:41:10.991541Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Sep 16 04:41:10.992071 waagent[2091]: 2025-09-16T04:41:10.992039Z INFO ExtHandler
Sep 16 04:41:10.992201 waagent[2091]: 2025-09-16T04:41:10.992177Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1]
Sep 16 04:41:10.997170 waagent[2091]: 2025-09-16T04:41:10.997142Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
Sep 16 04:41:11.044677 waagent[2091]: 2025-09-16T04:41:11.044455Z INFO ExtHandler Downloaded certificate {'thumbprint': 'D5EC5460ED9CA9A275A85A3B069344B3CAFD6D6A', 'hasPrivateKey': True}
Sep 16 04:41:11.044848 waagent[2091]: 2025-09-16T04:41:11.044815Z INFO ExtHandler Fetch goal state completed
Sep 16 04:41:11.054820 waagent[2091]: 2025-09-16T04:41:11.054779Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.4.2 1 Jul 2025 (Library: OpenSSL 3.4.2 1 Jul 2025)
Sep 16 04:41:11.057853 waagent[2091]: 2025-09-16T04:41:11.057812Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2091
Sep 16 04:41:11.057946 waagent[2091]: 2025-09-16T04:41:11.057921Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ********
Sep 16 04:41:11.058172 waagent[2091]: 2025-09-16T04:41:11.058146Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ********
Sep 16 04:41:11.059197 waagent[2091]: 2025-09-16T04:41:11.059165Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4459.0.0', '', 'Flatcar Container Linux by Kinvolk']
Sep 16 04:41:11.059495 waagent[2091]: 2025-09-16T04:41:11.059468Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4459.0.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported
Sep 16 04:41:11.059604 waagent[2091]: 2025-09-16T04:41:11.059583Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False
Sep 16 04:41:11.060021 waagent[2091]: 2025-09-16T04:41:11.059994Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Sep 16 04:41:11.209586 waagent[2091]: 2025-09-16T04:41:11.209549Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Sep 16 04:41:11.209773 waagent[2091]: 2025-09-16T04:41:11.209746Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Sep 16 04:41:11.214421 waagent[2091]: 2025-09-16T04:41:11.214048Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Sep 16 04:41:11.220967 systemd[1]: Reload requested from client PID 2106 ('systemctl') (unit waagent.service)...
Sep 16 04:41:11.220981 systemd[1]: Reloading...
Sep 16 04:41:11.300636 zram_generator::config[2154]: No configuration found.
Sep 16 04:41:11.427298 systemd[1]: Reloading finished in 206 ms.
Sep 16 04:41:11.443884 waagent[2091]: 2025-09-16T04:41:11.443795Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service
Sep 16 04:41:11.443948 waagent[2091]: 2025-09-16T04:41:11.443927Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully
Sep 16 04:41:12.299151 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 16 04:41:12.299962 systemd[1]: Started sshd@0-10.200.20.12:22-10.200.16.10:38382.service - OpenSSH per-connection server daemon (10.200.16.10:38382).
Sep 16 04:41:12.322191 waagent[2091]: 2025-09-16T04:41:12.322122Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up.
Sep 16 04:41:12.322452 waagent[2091]: 2025-09-16T04:41:12.322419Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True]
Sep 16 04:41:12.323054 waagent[2091]: 2025-09-16T04:41:12.323015Z INFO ExtHandler ExtHandler Starting env monitor service.
Sep 16 04:41:12.323351 waagent[2091]: 2025-09-16T04:41:12.323276Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Sep 16 04:41:12.323622 waagent[2091]: 2025-09-16T04:41:12.323497Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Sep 16 04:41:12.323622 waagent[2091]: 2025-09-16T04:41:12.323566Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Sep 16 04:41:12.323910 waagent[2091]: 2025-09-16T04:41:12.323828Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Sep 16 04:41:12.324013 waagent[2091]: 2025-09-16T04:41:12.323886Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Sep 16 04:41:12.324118 waagent[2091]: 2025-09-16T04:41:12.324005Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Sep 16 04:41:12.324356 waagent[2091]: 2025-09-16T04:41:12.324322Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Sep 16 04:41:12.324471 waagent[2091]: 2025-09-16T04:41:12.324355Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Sep 16 04:41:12.325000 waagent[2091]: 2025-09-16T04:41:12.324763Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Sep 16 04:41:12.325000 waagent[2091]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Sep 16 04:41:12.325000 waagent[2091]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0
Sep 16 04:41:12.325000 waagent[2091]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Sep 16 04:41:12.325000 waagent[2091]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Sep 16 04:41:12.325000 waagent[2091]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Sep 16 04:41:12.325000 waagent[2091]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Sep 16 04:41:12.325218 waagent[2091]: 2025-09-16T04:41:12.325191Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Sep 16 04:41:12.327863 waagent[2091]: 2025-09-16T04:41:12.327827Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Sep 16 04:41:12.327926 waagent[2091]: 2025-09-16T04:41:12.327905Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Sep 16 04:41:12.328138 waagent[2091]: 2025-09-16T04:41:12.328044Z INFO EnvHandler ExtHandler Configure routes
Sep 16 04:41:12.328192 waagent[2091]: 2025-09-16T04:41:12.328165Z INFO EnvHandler ExtHandler Gateway:None
Sep 16 04:41:12.328440 waagent[2091]: 2025-09-16T04:41:12.328414Z INFO EnvHandler ExtHandler Routes:None
Sep 16 04:41:12.329474 waagent[2091]: 2025-09-16T04:41:12.329434Z INFO ExtHandler ExtHandler
Sep 16 04:41:12.329525 waagent[2091]: 2025-09-16T04:41:12.329504Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 7b6285fa-1f41-476f-9868-1e723e06aa99 correlation b9aeaa98-e934-4a36-907f-1ccd13638fc4 created: 2025-09-16T04:39:55.678589Z]
Sep 16 04:41:12.330235 waagent[2091]: 2025-09-16T04:41:12.330074Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
Sep 16 04:41:12.331209 waagent[2091]: 2025-09-16T04:41:12.331178Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms]
Sep 16 04:41:12.384668 waagent[2091]: 2025-09-16T04:41:12.384605Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command
Sep 16 04:41:12.384668 waagent[2091]: Try `iptables -h' or 'iptables --help' for more information.)
Sep 16 04:41:12.384973 waagent[2091]: 2025-09-16T04:41:12.384942Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: F5D0C9C6-053D-4347-AEB6-DA145C4CE675;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;]
Sep 16 04:41:12.439635 waagent[2091]: 2025-09-16T04:41:12.439506Z INFO MonitorHandler ExtHandler Network interfaces:
Sep 16 04:41:12.439635 waagent[2091]: Executing ['ip', '-a', '-o', 'link']:
Sep 16 04:41:12.439635 waagent[2091]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Sep 16 04:41:12.439635 waagent[2091]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:f6:38:5b brd ff:ff:ff:ff:ff:ff
Sep 16 04:41:12.439635 waagent[2091]: 3: enP59280s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:f6:38:5b brd ff:ff:ff:ff:ff:ff\ altname enP59280p0s2
Sep 16 04:41:12.439635 waagent[2091]: Executing ['ip', '-4', '-a', '-o', 'address']:
Sep 16 04:41:12.439635 waagent[2091]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Sep 16 04:41:12.439635 waagent[2091]: 2: eth0 inet 10.200.20.12/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever
Sep 16 04:41:12.439635 waagent[2091]: Executing ['ip', '-6', '-a', '-o', 'address']:
Sep 16 04:41:12.439635 waagent[2091]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever
Sep 16 04:41:12.439635 waagent[2091]: 2: eth0 inet6 fe80::20d:3aff:fef6:385b/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Sep 16 04:41:12.536210 waagent[2091]: 2025-09-16T04:41:12.536160Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric:
Sep 16 04:41:12.536210 waagent[2091]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Sep 16 04:41:12.536210 waagent[2091]: pkts bytes target prot opt in out source destination
Sep 16 04:41:12.536210 waagent[2091]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Sep 16 04:41:12.536210 waagent[2091]: pkts bytes target prot opt in out source destination
Sep 16 04:41:12.536210 waagent[2091]: Chain OUTPUT (policy ACCEPT 1 packets, 52 bytes)
Sep 16 04:41:12.536210 waagent[2091]: pkts bytes target prot opt in out source destination
Sep 16 04:41:12.536210 waagent[2091]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Sep 16 04:41:12.536210 waagent[2091]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Sep 16 04:41:12.536210 waagent[2091]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Sep 16 04:41:12.538405 waagent[2091]: 2025-09-16T04:41:12.538366Z INFO EnvHandler ExtHandler Current Firewall rules:
Sep 16 04:41:12.538405 waagent[2091]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Sep 16 04:41:12.538405 waagent[2091]: pkts bytes target prot opt in out source destination
Sep 16 04:41:12.538405 waagent[2091]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Sep 16 04:41:12.538405 waagent[2091]: pkts bytes target prot opt in out source destination
Sep 16 04:41:12.538405 waagent[2091]: Chain OUTPUT (policy ACCEPT 1 packets, 52 bytes)
Sep 16 04:41:12.538405 waagent[2091]: pkts bytes target prot opt in out source destination
Sep 16 04:41:12.538405 waagent[2091]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Sep 16 04:41:12.538405 waagent[2091]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Sep 16 04:41:12.538405 waagent[2091]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Sep 16 04:41:12.538579 waagent[2091]: 2025-09-16T04:41:12.538556Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300
Sep 16 04:41:12.886599 sshd[2204]: Accepted publickey for core from 10.200.16.10 port 38382 ssh2: RSA SHA256:Zp261ZQ+rITQnwTNSqi9daI+o6M3rFPHdoLGEYx3TkE
Sep 16 04:41:12.887566 sshd-session[2204]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 04:41:12.891165 systemd-logind[1847]: New session 3 of user core.
Sep 16 04:41:12.901707 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 16 04:41:13.276963 systemd[1]: Started sshd@1-10.200.20.12:22-10.200.16.10:38394.service - OpenSSH per-connection server daemon (10.200.16.10:38394).
Sep 16 04:41:13.731279 sshd[2239]: Accepted publickey for core from 10.200.16.10 port 38394 ssh2: RSA SHA256:Zp261ZQ+rITQnwTNSqi9daI+o6M3rFPHdoLGEYx3TkE
Sep 16 04:41:13.732233 sshd-session[2239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 04:41:13.736355 systemd-logind[1847]: New session 4 of user core.
Sep 16 04:41:13.742726 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 16 04:41:14.055632 sshd[2242]: Connection closed by 10.200.16.10 port 38394
Sep 16 04:41:14.056114 sshd-session[2239]: pam_unix(sshd:session): session closed for user core
Sep 16 04:41:14.058944 systemd[1]: sshd@1-10.200.20.12:22-10.200.16.10:38394.service: Deactivated successfully.
Sep 16 04:41:14.060330 systemd[1]: session-4.scope: Deactivated successfully.
Sep 16 04:41:14.060884 systemd-logind[1847]: Session 4 logged out. Waiting for processes to exit.
Sep 16 04:41:14.062146 systemd-logind[1847]: Removed session 4.
Sep 16 04:41:14.148365 systemd[1]: Started sshd@2-10.200.20.12:22-10.200.16.10:38406.service - OpenSSH per-connection server daemon (10.200.16.10:38406).
Sep 16 04:41:14.567388 sshd[2248]: Accepted publickey for core from 10.200.16.10 port 38406 ssh2: RSA SHA256:Zp261ZQ+rITQnwTNSqi9daI+o6M3rFPHdoLGEYx3TkE
Sep 16 04:41:14.568369 sshd-session[2248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 04:41:14.571785 systemd-logind[1847]: New session 5 of user core.
Sep 16 04:41:14.579726 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 16 04:41:14.876686 sshd[2251]: Connection closed by 10.200.16.10 port 38406
Sep 16 04:41:14.873753 sshd-session[2248]: pam_unix(sshd:session): session closed for user core
Sep 16 04:41:14.877360 systemd[1]: sshd@2-10.200.20.12:22-10.200.16.10:38406.service: Deactivated successfully.
Sep 16 04:41:14.878931 systemd[1]: session-5.scope: Deactivated successfully.
Sep 16 04:41:14.880131 systemd-logind[1847]: Session 5 logged out. Waiting for processes to exit.
Sep 16 04:41:14.880922 systemd-logind[1847]: Removed session 5.
Sep 16 04:41:14.947815 systemd[1]: Started sshd@3-10.200.20.12:22-10.200.16.10:38412.service - OpenSSH per-connection server daemon (10.200.16.10:38412).
Sep 16 04:41:15.362200 sshd[2257]: Accepted publickey for core from 10.200.16.10 port 38412 ssh2: RSA SHA256:Zp261ZQ+rITQnwTNSqi9daI+o6M3rFPHdoLGEYx3TkE
Sep 16 04:41:15.363266 sshd-session[2257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 04:41:15.366687 systemd-logind[1847]: New session 6 of user core.
Sep 16 04:41:15.373726 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 16 04:41:15.671625 sshd[2260]: Connection closed by 10.200.16.10 port 38412
Sep 16 04:41:15.672235 sshd-session[2257]: pam_unix(sshd:session): session closed for user core
Sep 16 04:41:15.675858 systemd[1]: sshd@3-10.200.20.12:22-10.200.16.10:38412.service: Deactivated successfully.
Sep 16 04:41:15.677351 systemd[1]: session-6.scope: Deactivated successfully.
Sep 16 04:41:15.678145 systemd-logind[1847]: Session 6 logged out. Waiting for processes to exit.
Sep 16 04:41:15.679488 systemd-logind[1847]: Removed session 6.
Sep 16 04:41:15.760667 systemd[1]: Started sshd@4-10.200.20.12:22-10.200.16.10:38428.service - OpenSSH per-connection server daemon (10.200.16.10:38428).
Sep 16 04:41:16.212761 sshd[2266]: Accepted publickey for core from 10.200.16.10 port 38428 ssh2: RSA SHA256:Zp261ZQ+rITQnwTNSqi9daI+o6M3rFPHdoLGEYx3TkE
Sep 16 04:41:16.213769 sshd-session[2266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 04:41:16.217297 systemd-logind[1847]: New session 7 of user core.
Sep 16 04:41:16.224727 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 16 04:41:16.627484 sudo[2270]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 16 04:41:16.627716 sudo[2270]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 16 04:41:16.653861 sudo[2270]: pam_unix(sudo:session): session closed for user root
Sep 16 04:41:16.724027 sshd[2269]: Connection closed by 10.200.16.10 port 38428
Sep 16 04:41:16.724640 sshd-session[2266]: pam_unix(sshd:session): session closed for user core
Sep 16 04:41:16.727560 systemd[1]: sshd@4-10.200.20.12:22-10.200.16.10:38428.service: Deactivated successfully.
Sep 16 04:41:16.729115 systemd[1]: session-7.scope: Deactivated successfully.
Sep 16 04:41:16.730730 systemd-logind[1847]: Session 7 logged out. Waiting for processes to exit.
Sep 16 04:41:16.731892 systemd-logind[1847]: Removed session 7.
Sep 16 04:41:16.811966 systemd[1]: Started sshd@5-10.200.20.12:22-10.200.16.10:38442.service - OpenSSH per-connection server daemon (10.200.16.10:38442).
Sep 16 04:41:17.225475 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 16 04:41:17.227116 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 16 04:41:17.315376 sshd[2276]: Accepted publickey for core from 10.200.16.10 port 38442 ssh2: RSA SHA256:Zp261ZQ+rITQnwTNSqi9daI+o6M3rFPHdoLGEYx3TkE
Sep 16 04:41:17.316350 sshd-session[2276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 04:41:17.321084 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 16 04:41:17.322223 systemd-logind[1847]: New session 8 of user core.
Sep 16 04:41:17.323326 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 16 04:41:17.323328 (kubelet)[2286]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 16 04:41:17.445446 kubelet[2286]: E0916 04:41:17.445377 2286 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 16 04:41:17.448083 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 16 04:41:17.448189 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 16 04:41:17.449730 systemd[1]: kubelet.service: Consumed 109ms CPU time, 108.2M memory peak.
Sep 16 04:41:17.591463 sudo[2296]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 16 04:41:17.591687 sudo[2296]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 16 04:41:17.775483 sudo[2296]: pam_unix(sudo:session): session closed for user root
Sep 16 04:41:17.779557 sudo[2295]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Sep 16 04:41:17.780062 sudo[2295]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 16 04:41:17.788003 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 16 04:41:17.813146 augenrules[2318]: No rules
Sep 16 04:41:17.814143 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 16 04:41:17.814433 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 16 04:41:17.815781 sudo[2295]: pam_unix(sudo:session): session closed for user root
Sep 16 04:41:17.903708 sshd[2288]: Connection closed by 10.200.16.10 port 38442
Sep 16 04:41:17.904162 sshd-session[2276]: pam_unix(sshd:session): session closed for user core
Sep 16 04:41:17.907600 systemd[1]: sshd@5-10.200.20.12:22-10.200.16.10:38442.service: Deactivated successfully.
Sep 16 04:41:17.909068 systemd[1]: session-8.scope: Deactivated successfully.
Sep 16 04:41:17.909853 systemd-logind[1847]: Session 8 logged out. Waiting for processes to exit.
Sep 16 04:41:17.911068 systemd-logind[1847]: Removed session 8.
Sep 16 04:41:17.996525 systemd[1]: Started sshd@6-10.200.20.12:22-10.200.16.10:38444.service - OpenSSH per-connection server daemon (10.200.16.10:38444).
Sep 16 04:41:18.415391 sshd[2327]: Accepted publickey for core from 10.200.16.10 port 38444 ssh2: RSA SHA256:Zp261ZQ+rITQnwTNSqi9daI+o6M3rFPHdoLGEYx3TkE
Sep 16 04:41:18.416327 sshd-session[2327]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 04:41:18.419716 systemd-logind[1847]: New session 9 of user core.
Sep 16 04:41:18.426741 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 16 04:41:18.650818 sudo[2331]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 16 04:41:18.651018 sudo[2331]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 16 04:41:20.696245 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 16 04:41:20.707858 (dockerd)[2348]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 16 04:41:21.569357 dockerd[2348]: time="2025-09-16T04:41:21.569301544Z" level=info msg="Starting up"
Sep 16 04:41:21.569950 dockerd[2348]: time="2025-09-16T04:41:21.569929608Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Sep 16 04:41:21.578012 dockerd[2348]: time="2025-09-16T04:41:21.577981952Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Sep 16 04:41:21.719277 dockerd[2348]: time="2025-09-16T04:41:21.719199560Z" level=info msg="Loading containers: start."
Sep 16 04:41:21.799625 kernel: Initializing XFRM netlink socket
Sep 16 04:41:22.242288 systemd-networkd[1693]: docker0: Link UP
Sep 16 04:41:22.258992 dockerd[2348]: time="2025-09-16T04:41:22.258955280Z" level=info msg="Loading containers: done."
Sep 16 04:41:22.268074 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1304572382-merged.mount: Deactivated successfully.
Sep 16 04:41:22.290967 dockerd[2348]: time="2025-09-16T04:41:22.290933352Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 16 04:41:22.291081 dockerd[2348]: time="2025-09-16T04:41:22.291004920Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Sep 16 04:41:22.291081 dockerd[2348]: time="2025-09-16T04:41:22.291068824Z" level=info msg="Initializing buildkit"
Sep 16 04:41:22.340547 dockerd[2348]: time="2025-09-16T04:41:22.340479896Z" level=info msg="Completed buildkit initialization"
Sep 16 04:41:22.345634 dockerd[2348]: time="2025-09-16T04:41:22.345590776Z" level=info msg="Daemon has completed initialization"
Sep 16 04:41:22.346112 dockerd[2348]: time="2025-09-16T04:41:22.345832544Z" level=info msg="API listen on /run/docker.sock"
Sep 16 04:41:22.346029 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 16 04:41:23.354893 containerd[1865]: time="2025-09-16T04:41:23.354823584Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\""
Sep 16 04:41:24.271232 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3854611157.mount: Deactivated successfully.
Sep 16 04:41:25.265079 containerd[1865]: time="2025-09-16T04:41:25.265022504Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:41:25.268032 containerd[1865]: time="2025-09-16T04:41:25.267882008Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=26363685"
Sep 16 04:41:25.270978 containerd[1865]: time="2025-09-16T04:41:25.270954360Z" level=info msg="ImageCreate event name:\"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:41:25.275105 containerd[1865]: time="2025-09-16T04:41:25.275073560Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:41:25.275695 containerd[1865]: time="2025-09-16T04:41:25.275670944Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"26360284\" in 1.920782296s"
Sep 16 04:41:25.275783 containerd[1865]: time="2025-09-16T04:41:25.275768520Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\""
Sep 16 04:41:25.276364 containerd[1865]: time="2025-09-16T04:41:25.276283272Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\""
Sep 16 04:41:26.432011 containerd[1865]: time="2025-09-16T04:41:26.431957456Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:41:26.435127 containerd[1865]: time="2025-09-16T04:41:26.435094640Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=22531200"
Sep 16 04:41:26.438337 containerd[1865]: time="2025-09-16T04:41:26.438301056Z" level=info msg="ImageCreate event name:\"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:41:26.443047 containerd[1865]: time="2025-09-16T04:41:26.443000048Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:41:26.443557 containerd[1865]: time="2025-09-16T04:41:26.443529632Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"24099975\" in 1.16722364s"
Sep 16 04:41:26.443557 containerd[1865]: time="2025-09-16T04:41:26.443557656Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\""
Sep 16 04:41:26.444009 containerd[1865]: time="2025-09-16T04:41:26.443955664Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\""
Sep 16 04:41:27.450599 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 16 04:41:27.452705 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 16 04:41:27.561210 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 16 04:41:27.566863 (kubelet)[2633]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 16 04:41:27.692895 kubelet[2633]: E0916 04:41:27.692840 2633 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 16 04:41:27.694979 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 16 04:41:27.695086 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 16 04:41:27.695356 systemd[1]: kubelet.service: Consumed 106ms CPU time, 105.1M memory peak.
Sep 16 04:41:28.031567 containerd[1865]: time="2025-09-16T04:41:28.031516656Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:41:28.035549 containerd[1865]: time="2025-09-16T04:41:28.035380920Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=17484324"
Sep 16 04:41:28.039669 containerd[1865]: time="2025-09-16T04:41:28.039647808Z" level=info msg="ImageCreate event name:\"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:41:28.045377 containerd[1865]: time="2025-09-16T04:41:28.045333920Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:41:28.045923 containerd[1865]: time="2025-09-16T04:41:28.045811024Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"19053117\" in 1.601832664s"
Sep 16 04:41:28.045923 containerd[1865]: time="2025-09-16T04:41:28.045837272Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\""
Sep 16 04:41:28.046420 containerd[1865]: time="2025-09-16T04:41:28.046396840Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\""
Sep 16 04:41:29.714045 chronyd[1825]: Selected source PHC0
Sep 16 04:41:29.943539 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount717557663.mount: Deactivated successfully.
Sep 16 04:41:30.212122 containerd[1865]: time="2025-09-16T04:41:30.212077037Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:41:30.215723 containerd[1865]: time="2025-09-16T04:41:30.215691974Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=27417817" Sep 16 04:41:30.218570 containerd[1865]: time="2025-09-16T04:41:30.218529618Z" level=info msg="ImageCreate event name:\"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:41:30.224316 containerd[1865]: time="2025-09-16T04:41:30.224277616Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:41:30.224726 containerd[1865]: time="2025-09-16T04:41:30.224515092Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"27416836\" in 2.178095498s" Sep 16 04:41:30.224726 containerd[1865]: time="2025-09-16T04:41:30.224542789Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\"" Sep 16 04:41:30.225388 containerd[1865]: time="2025-09-16T04:41:30.225240018Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 16 04:41:30.925931 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3546318962.mount: Deactivated successfully. 
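Each successful pull in this stretch ends with a "Pulled image ... in <duration>" record (1.92s for the apiserver, 1.17s for the controller-manager, 2.18s for kube-proxy, and so on). A rough Go sketch, assuming the journal text is piped in on stdin, that extracts and totals those durations:

// Not part of containerd: tally the per-image pull times that containerd
// reports in its "Pulled image ... in <duration>" records.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"time"
)

func main() {
	// Matches e.g. Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" ... in 2.178095498s
	re := regexp.MustCompile(`Pulled image \\?"([^"\\]+)\\?".* in ([0-9.]+m?s)`)
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be long
	var total time.Duration
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			d, err := time.ParseDuration(m[2]) // handles "1.92s" and "602.21ms" alike
			if err != nil {
				continue
			}
			total += d
			fmt.Printf("%-55s %v\n", m[1], d)
		}
	}
	fmt.Printf("total pull time: %v\n", total)
}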
Sep 16 04:41:31.863732 containerd[1865]: time="2025-09-16T04:41:31.863680271Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:41:31.868033 containerd[1865]: time="2025-09-16T04:41:31.867995631Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Sep 16 04:41:31.871419 containerd[1865]: time="2025-09-16T04:41:31.871379183Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:41:31.875702 containerd[1865]: time="2025-09-16T04:41:31.875671071Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:41:31.876487 containerd[1865]: time="2025-09-16T04:41:31.876348063Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.651083555s" Sep 16 04:41:31.876487 containerd[1865]: time="2025-09-16T04:41:31.876371871Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Sep 16 04:41:31.876755 containerd[1865]: time="2025-09-16T04:41:31.876736711Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 16 04:41:32.442466 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1965046404.mount: Deactivated successfully. 
Sep 16 04:41:32.462307 containerd[1865]: time="2025-09-16T04:41:32.461839623Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 16 04:41:32.465359 containerd[1865]: time="2025-09-16T04:41:32.465337847Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Sep 16 04:41:32.473887 containerd[1865]: time="2025-09-16T04:41:32.473866127Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 16 04:41:32.478709 containerd[1865]: time="2025-09-16T04:41:32.478689287Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 16 04:41:32.478999 containerd[1865]: time="2025-09-16T04:41:32.478973727Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 602.212824ms" Sep 16 04:41:32.479043 containerd[1865]: time="2025-09-16T04:41:32.479001207Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 16 04:41:32.479430 containerd[1865]: time="2025-09-16T04:41:32.479412343Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Sep 16 04:41:33.120987 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3787637001.mount: Deactivated successfully. 
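The pause image above is the only pull recorded with an extra io.cri-containerd.pinned=pinned label, which exempts it from image garbage collection. A hedged sketch using containerd's Go client to list pinned images in the CRI namespace; the label-filter syntax here is my assumption about containerd's filter grammar, not something this log confirms:

package main

import (
	"context"
	"fmt"
	"log"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	// CRI-managed images live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	// Assumed filter syntax for matching the pinned label.
	imgs, err := client.ImageService().List(ctx, `labels."io.cri-containerd.pinned"==pinned`)
	if err != nil {
		log.Fatal(err)
	}
	for _, img := range imgs {
		fmt.Println(img.Name) // on this node, registry.k8s.io/pause:3.10
	}
}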
Sep 16 04:41:35.211653 containerd[1865]: time="2025-09-16T04:41:35.211105871Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:41:35.214109 containerd[1865]: time="2025-09-16T04:41:35.213929831Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943165" Sep 16 04:41:35.218365 containerd[1865]: time="2025-09-16T04:41:35.218340279Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:41:35.226739 containerd[1865]: time="2025-09-16T04:41:35.226718047Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:41:35.227438 containerd[1865]: time="2025-09-16T04:41:35.227413095Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.747914744s" Sep 16 04:41:35.227438 containerd[1865]: time="2025-09-16T04:41:35.227439055Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Sep 16 04:41:37.668431 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:41:37.668856 systemd[1]: kubelet.service: Consumed 106ms CPU time, 105.1M memory peak. Sep 16 04:41:37.670569 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:41:37.689081 systemd[1]: Reload requested from client PID 2786 ('systemctl') (unit session-9.scope)... Sep 16 04:41:37.689164 systemd[1]: Reloading... Sep 16 04:41:37.786670 zram_generator::config[2839]: No configuration found. Sep 16 04:41:37.931432 systemd[1]: Reloading finished in 241 ms. Sep 16 04:41:37.967953 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 16 04:41:37.968008 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 16 04:41:37.968182 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:41:37.968219 systemd[1]: kubelet.service: Consumed 71ms CPU time, 94.9M memory peak. Sep 16 04:41:37.969231 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:41:38.196460 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:41:38.203819 (kubelet)[2900]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 16 04:41:38.402635 kubelet[2900]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 16 04:41:38.402635 kubelet[2900]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 16 04:41:38.402635 kubelet[2900]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 16 04:41:38.402635 kubelet[2900]: I0916 04:41:38.401872 2900 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 16 04:41:38.817247 kubelet[2900]: I0916 04:41:38.816856 2900 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 16 04:41:38.817247 kubelet[2900]: I0916 04:41:38.816888 2900 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 16 04:41:38.817247 kubelet[2900]: I0916 04:41:38.817191 2900 server.go:954] "Client rotation is on, will bootstrap in background" Sep 16 04:41:38.841766 kubelet[2900]: I0916 04:41:38.841643 2900 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 16 04:41:38.842133 kubelet[2900]: E0916 04:41:38.842104 2900 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.12:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.12:6443: connect: connection refused" logger="UnhandledError" Sep 16 04:41:38.846879 kubelet[2900]: I0916 04:41:38.846857 2900 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 16 04:41:38.849855 kubelet[2900]: I0916 04:41:38.849838 2900 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 16 04:41:38.851021 kubelet[2900]: I0916 04:41:38.850991 2900 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 16 04:41:38.851144 kubelet[2900]: I0916 04:41:38.851024 2900 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.0.0-n-404d4275b5","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 16 04:41:38.851228 kubelet[2900]: I0916 
04:41:38.851149 2900 topology_manager.go:138] "Creating topology manager with none policy" Sep 16 04:41:38.851228 kubelet[2900]: I0916 04:41:38.851156 2900 container_manager_linux.go:304] "Creating device plugin manager" Sep 16 04:41:38.851271 kubelet[2900]: I0916 04:41:38.851259 2900 state_mem.go:36] "Initialized new in-memory state store" Sep 16 04:41:38.854579 kubelet[2900]: I0916 04:41:38.854305 2900 kubelet.go:446] "Attempting to sync node with API server" Sep 16 04:41:38.854579 kubelet[2900]: I0916 04:41:38.854329 2900 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 16 04:41:38.854579 kubelet[2900]: I0916 04:41:38.854352 2900 kubelet.go:352] "Adding apiserver pod source" Sep 16 04:41:38.854579 kubelet[2900]: I0916 04:41:38.854364 2900 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 16 04:41:38.858004 kubelet[2900]: W0916 04:41:38.857974 2900 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.0.0-n-404d4275b5&limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Sep 16 04:41:38.858101 kubelet[2900]: E0916 04:41:38.858086 2900 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.0.0-n-404d4275b5&limit=500&resourceVersion=0\": dial tcp 10.200.20.12:6443: connect: connection refused" logger="UnhandledError" Sep 16 04:41:38.858205 kubelet[2900]: I0916 04:41:38.858194 2900 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 16 04:41:38.858549 kubelet[2900]: I0916 04:41:38.858534 2900 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 16 04:41:38.858660 kubelet[2900]: W0916 04:41:38.858651 2900 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
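The container_manager_linux record above serializes the kubelet's resolved NodeConfig as JSON, including the default hard-eviction thresholds (memory.available < 100Mi, nodefs.available < 10%, and so on). A small Go sketch decoding just those fields from a trimmed excerpt of that JSON, with values copied from the record:

package main

import (
	"encoding/json"
	"fmt"
	"log"
)

type threshold struct {
	Signal   string
	Operator string
	Value    struct {
		Quantity   *string // e.g. "100Mi"; null when the threshold is a percentage
		Percentage float64
	}
}

type nodeConfig struct {
	NodeName               string
	CgroupDriver           string
	HardEvictionThresholds []threshold
}

func main() {
	// Excerpt of the JSON logged by container_manager_linux.go above;
	// unknown fields are simply ignored by encoding/json.
	raw := `{"NodeName":"ci-4459.0.0-n-404d4275b5","CgroupDriver":"systemd",
	 "HardEvictionThresholds":[
	  {"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0}},
	  {"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1}}]}`
	var cfg nodeConfig
	if err := json.Unmarshal([]byte(raw), &cfg); err != nil {
		log.Fatal(err)
	}
	for _, t := range cfg.HardEvictionThresholds {
		if t.Value.Quantity != nil {
			fmt.Printf("%s %s %s\n", t.Signal, t.Operator, *t.Value.Quantity)
		} else {
			fmt.Printf("%s %s %.0f%%\n", t.Signal, t.Operator, t.Value.Percentage*100)
		}
	}
}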
Sep 16 04:41:38.859115 kubelet[2900]: I0916 04:41:38.859098 2900 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 16 04:41:38.859205 kubelet[2900]: I0916 04:41:38.859197 2900 server.go:1287] "Started kubelet" Sep 16 04:41:38.861379 kubelet[2900]: I0916 04:41:38.861361 2900 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 16 04:41:38.861513 kubelet[2900]: W0916 04:41:38.861480 2900 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Sep 16 04:41:38.861549 kubelet[2900]: E0916 04:41:38.861513 2900 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.12:6443: connect: connection refused" logger="UnhandledError" Sep 16 04:41:38.861573 kubelet[2900]: I0916 04:41:38.861559 2900 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 16 04:41:38.861649 kubelet[2900]: I0916 04:41:38.861625 2900 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 16 04:41:38.862163 kubelet[2900]: I0916 04:41:38.862145 2900 server.go:479] "Adding debug handlers to kubelet server" Sep 16 04:41:38.865483 kubelet[2900]: I0916 04:41:38.864631 2900 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 16 04:41:38.865483 kubelet[2900]: E0916 04:41:38.864789 2900 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.0.0-n-404d4275b5\" not found" Sep 16 04:41:38.865483 kubelet[2900]: I0916 04:41:38.864810 2900 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 16 04:41:38.865483 kubelet[2900]: I0916 04:41:38.864994 2900 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 16 04:41:38.865584 kubelet[2900]: I0916 04:41:38.865576 2900 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 16 04:41:38.865648 kubelet[2900]: I0916 04:41:38.865636 2900 reconciler.go:26] "Reconciler: start to sync state" Sep 16 04:41:38.866438 kubelet[2900]: W0916 04:41:38.866400 2900 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Sep 16 04:41:38.866438 kubelet[2900]: E0916 04:41:38.866437 2900 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.12:6443: connect: connection refused" logger="UnhandledError" Sep 16 04:41:38.866512 kubelet[2900]: E0916 04:41:38.866481 2900 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.0.0-n-404d4275b5?timeout=10s\": dial tcp 10.200.20.12:6443: connect: connection refused" interval="200ms" Sep 16 04:41:38.872443 kubelet[2900]: E0916 
04:41:38.872362 2900 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.12:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.12:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459.0.0-n-404d4275b5.1865a997c4749486 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.0.0-n-404d4275b5,UID:ci-4459.0.0-n-404d4275b5,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459.0.0-n-404d4275b5,},FirstTimestamp:2025-09-16 04:41:38.859177094 +0000 UTC m=+0.652655810,LastTimestamp:2025-09-16 04:41:38.859177094 +0000 UTC m=+0.652655810,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.0.0-n-404d4275b5,}" Sep 16 04:41:38.872819 kubelet[2900]: I0916 04:41:38.872800 2900 factory.go:221] Registration of the containerd container factory successfully Sep 16 04:41:38.872891 kubelet[2900]: I0916 04:41:38.872884 2900 factory.go:221] Registration of the systemd container factory successfully Sep 16 04:41:38.872993 kubelet[2900]: I0916 04:41:38.872978 2900 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 16 04:41:38.893129 kubelet[2900]: E0916 04:41:38.893108 2900 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 16 04:41:38.897561 kubelet[2900]: I0916 04:41:38.897545 2900 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 16 04:41:38.897561 kubelet[2900]: I0916 04:41:38.897557 2900 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 16 04:41:38.897653 kubelet[2900]: I0916 04:41:38.897571 2900 state_mem.go:36] "Initialized new in-memory state store" Sep 16 04:41:38.903637 kubelet[2900]: I0916 04:41:38.903618 2900 policy_none.go:49] "None policy: Start" Sep 16 04:41:38.903637 kubelet[2900]: I0916 04:41:38.903636 2900 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 16 04:41:38.903637 kubelet[2900]: I0916 04:41:38.903645 2900 state_mem.go:35] "Initializing new in-memory state store" Sep 16 04:41:38.913773 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 16 04:41:38.926385 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 16 04:41:38.928835 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Sep 16 04:41:38.930209 kubelet[2900]: W0916 04:41:38.930166 2900 helpers.go:245] readString: Failed to read "/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/cpuset.cpus.effective": read /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/cpuset.cpus.effective: no such device Sep 16 04:41:38.939269 kubelet[2900]: I0916 04:41:38.939251 2900 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 16 04:41:38.939661 kubelet[2900]: I0916 04:41:38.939406 2900 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 16 04:41:38.939661 kubelet[2900]: I0916 04:41:38.939418 2900 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 16 04:41:38.939661 kubelet[2900]: I0916 04:41:38.939594 2900 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 16 04:41:38.941085 kubelet[2900]: E0916 04:41:38.941058 2900 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 16 04:41:38.941218 kubelet[2900]: E0916 04:41:38.941200 2900 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459.0.0-n-404d4275b5\" not found" Sep 16 04:41:39.010309 kubelet[2900]: I0916 04:41:39.010271 2900 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 16 04:41:39.011229 kubelet[2900]: I0916 04:41:39.011212 2900 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 16 04:41:39.011523 kubelet[2900]: I0916 04:41:39.011294 2900 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 16 04:41:39.011523 kubelet[2900]: I0916 04:41:39.011319 2900 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
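All of the reflector, lease, and CSR errors in this stretch are one symptom: nothing is listening on 10.200.20.12:6443 yet, because the kube-apiserver the kubelet is about to launch as a static pod is what will serve that address. An illustrative Go probe for that endpoint, with TLS verification skipped since only reachability matters for this sketch:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Illustration only: poll the endpoint the kubelet is retrying above.
	c := &http.Client{
		Timeout:   3 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for {
		resp, err := c.Get("https://10.200.20.12:6443/healthz")
		if err == nil {
			resp.Body.Close()
			fmt.Println("apiserver reachable:", resp.Status)
			return
		}
		fmt.Println("still waiting:", err) // e.g. "connect: connection refused"
		time.Sleep(2 * time.Second)
	}
}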
Sep 16 04:41:39.011523 kubelet[2900]: I0916 04:41:39.011327 2900 kubelet.go:2382] "Starting kubelet main sync loop" Sep 16 04:41:39.011523 kubelet[2900]: E0916 04:41:39.011362 2900 kubelet.go:2406] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Sep 16 04:41:39.012385 kubelet[2900]: W0916 04:41:39.012356 2900 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Sep 16 04:41:39.012429 kubelet[2900]: E0916 04:41:39.012387 2900 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.12:6443: connect: connection refused" logger="UnhandledError" Sep 16 04:41:39.041573 kubelet[2900]: I0916 04:41:39.041533 2900 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.0.0-n-404d4275b5" Sep 16 04:41:39.042013 kubelet[2900]: E0916 04:41:39.041993 2900 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.12:6443/api/v1/nodes\": dial tcp 10.200.20.12:6443: connect: connection refused" node="ci-4459.0.0-n-404d4275b5" Sep 16 04:41:39.067496 kubelet[2900]: E0916 04:41:39.067396 2900 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.0.0-n-404d4275b5?timeout=10s\": dial tcp 10.200.20.12:6443: connect: connection refused" interval="400ms" Sep 16 04:41:39.120139 systemd[1]: Created slice kubepods-burstable-pod31f0dd937061361b7c132e1a21f68015.slice - libcontainer container kubepods-burstable-pod31f0dd937061361b7c132e1a21f68015.slice. Sep 16 04:41:39.130326 kubelet[2900]: E0916 04:41:39.130119 2900 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.0.0-n-404d4275b5\" not found" node="ci-4459.0.0-n-404d4275b5" Sep 16 04:41:39.132577 systemd[1]: Created slice kubepods-burstable-pod6bb9e7e24d443a18fce3222a9a3e6eb9.slice - libcontainer container kubepods-burstable-pod6bb9e7e24d443a18fce3222a9a3e6eb9.slice. Sep 16 04:41:39.141630 kubelet[2900]: E0916 04:41:39.141493 2900 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.0.0-n-404d4275b5\" not found" node="ci-4459.0.0-n-404d4275b5" Sep 16 04:41:39.143686 systemd[1]: Created slice kubepods-burstable-pod367300998050b6329fb25476494f3597.slice - libcontainer container kubepods-burstable-pod367300998050b6329fb25476494f3597.slice. 
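Shortly after this point, each static pod gets its own systemd slice named from the pod's UID under kubepods-burstable.slice; the three "Created slice kubepods-burstable-pod....slice" records below correspond to the apiserver, controller-manager, and scheduler manifests. A sketch of that naming, assuming (from the slice names in this log plus systemd's escaping rules) that dashes in a UID become underscores:

package main

import (
	"fmt"
	"strings"
)

// podSlice derives the systemd slice name the kubelet uses for a burstable
// pod under the "systemd" cgroup driver. This mirrors the slice names visible
// in the log; the dash-to-underscore escaping is my assumption.
func podSlice(uid string) string {
	return "kubepods-burstable-pod" + strings.ReplaceAll(uid, "-", "_") + ".slice"
}

func main() {
	uid := "31f0dd937061361b7c132e1a21f68015" // kube-apiserver pod UID from this log
	fmt.Println("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/" + podSlice(uid))
}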
Sep 16 04:41:39.145048 kubelet[2900]: E0916 04:41:39.145026 2900 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.0.0-n-404d4275b5\" not found" node="ci-4459.0.0-n-404d4275b5" Sep 16 04:41:39.167487 kubelet[2900]: I0916 04:41:39.167330 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/31f0dd937061361b7c132e1a21f68015-k8s-certs\") pod \"kube-apiserver-ci-4459.0.0-n-404d4275b5\" (UID: \"31f0dd937061361b7c132e1a21f68015\") " pod="kube-system/kube-apiserver-ci-4459.0.0-n-404d4275b5" Sep 16 04:41:39.167487 kubelet[2900]: I0916 04:41:39.167360 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/31f0dd937061361b7c132e1a21f68015-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.0.0-n-404d4275b5\" (UID: \"31f0dd937061361b7c132e1a21f68015\") " pod="kube-system/kube-apiserver-ci-4459.0.0-n-404d4275b5" Sep 16 04:41:39.167487 kubelet[2900]: I0916 04:41:39.167374 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6bb9e7e24d443a18fce3222a9a3e6eb9-ca-certs\") pod \"kube-controller-manager-ci-4459.0.0-n-404d4275b5\" (UID: \"6bb9e7e24d443a18fce3222a9a3e6eb9\") " pod="kube-system/kube-controller-manager-ci-4459.0.0-n-404d4275b5" Sep 16 04:41:39.167487 kubelet[2900]: I0916 04:41:39.167385 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6bb9e7e24d443a18fce3222a9a3e6eb9-k8s-certs\") pod \"kube-controller-manager-ci-4459.0.0-n-404d4275b5\" (UID: \"6bb9e7e24d443a18fce3222a9a3e6eb9\") " pod="kube-system/kube-controller-manager-ci-4459.0.0-n-404d4275b5" Sep 16 04:41:39.167487 kubelet[2900]: I0916 04:41:39.167396 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6bb9e7e24d443a18fce3222a9a3e6eb9-kubeconfig\") pod \"kube-controller-manager-ci-4459.0.0-n-404d4275b5\" (UID: \"6bb9e7e24d443a18fce3222a9a3e6eb9\") " pod="kube-system/kube-controller-manager-ci-4459.0.0-n-404d4275b5" Sep 16 04:41:39.167673 kubelet[2900]: I0916 04:41:39.167404 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/31f0dd937061361b7c132e1a21f68015-ca-certs\") pod \"kube-apiserver-ci-4459.0.0-n-404d4275b5\" (UID: \"31f0dd937061361b7c132e1a21f68015\") " pod="kube-system/kube-apiserver-ci-4459.0.0-n-404d4275b5" Sep 16 04:41:39.167673 kubelet[2900]: I0916 04:41:39.167414 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6bb9e7e24d443a18fce3222a9a3e6eb9-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.0.0-n-404d4275b5\" (UID: \"6bb9e7e24d443a18fce3222a9a3e6eb9\") " pod="kube-system/kube-controller-manager-ci-4459.0.0-n-404d4275b5" Sep 16 04:41:39.167673 kubelet[2900]: I0916 04:41:39.167422 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6bb9e7e24d443a18fce3222a9a3e6eb9-usr-share-ca-certificates\") pod 
\"kube-controller-manager-ci-4459.0.0-n-404d4275b5\" (UID: \"6bb9e7e24d443a18fce3222a9a3e6eb9\") " pod="kube-system/kube-controller-manager-ci-4459.0.0-n-404d4275b5" Sep 16 04:41:39.167673 kubelet[2900]: I0916 04:41:39.167432 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/367300998050b6329fb25476494f3597-kubeconfig\") pod \"kube-scheduler-ci-4459.0.0-n-404d4275b5\" (UID: \"367300998050b6329fb25476494f3597\") " pod="kube-system/kube-scheduler-ci-4459.0.0-n-404d4275b5" Sep 16 04:41:39.244054 kubelet[2900]: I0916 04:41:39.244022 2900 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.0.0-n-404d4275b5" Sep 16 04:41:39.244370 kubelet[2900]: E0916 04:41:39.244329 2900 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.12:6443/api/v1/nodes\": dial tcp 10.200.20.12:6443: connect: connection refused" node="ci-4459.0.0-n-404d4275b5" Sep 16 04:41:39.431706 containerd[1865]: time="2025-09-16T04:41:39.431578645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.0.0-n-404d4275b5,Uid:31f0dd937061361b7c132e1a21f68015,Namespace:kube-system,Attempt:0,}" Sep 16 04:41:39.443280 containerd[1865]: time="2025-09-16T04:41:39.443143498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.0.0-n-404d4275b5,Uid:6bb9e7e24d443a18fce3222a9a3e6eb9,Namespace:kube-system,Attempt:0,}" Sep 16 04:41:39.446064 containerd[1865]: time="2025-09-16T04:41:39.445965733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.0.0-n-404d4275b5,Uid:367300998050b6329fb25476494f3597,Namespace:kube-system,Attempt:0,}" Sep 16 04:41:39.467839 kubelet[2900]: E0916 04:41:39.467809 2900 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.0.0-n-404d4275b5?timeout=10s\": dial tcp 10.200.20.12:6443: connect: connection refused" interval="800ms" Sep 16 04:41:39.646973 kubelet[2900]: I0916 04:41:39.646936 2900 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.0.0-n-404d4275b5" Sep 16 04:41:39.650343 kubelet[2900]: E0916 04:41:39.647454 2900 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.12:6443/api/v1/nodes\": dial tcp 10.200.20.12:6443: connect: connection refused" node="ci-4459.0.0-n-404d4275b5" Sep 16 04:41:39.700647 kubelet[2900]: W0916 04:41:39.700394 2900 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Sep 16 04:41:39.700831 kubelet[2900]: E0916 04:41:39.700797 2900 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.12:6443: connect: connection refused" logger="UnhandledError" Sep 16 04:41:40.072156 kubelet[2900]: W0916 04:41:40.071974 2900 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.0.0-n-404d4275b5&limit=500&resourceVersion=0": 
dial tcp 10.200.20.12:6443: connect: connection refused Sep 16 04:41:40.072156 kubelet[2900]: E0916 04:41:40.072048 2900 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.0.0-n-404d4275b5&limit=500&resourceVersion=0\": dial tcp 10.200.20.12:6443: connect: connection refused" logger="UnhandledError" Sep 16 04:41:40.222070 kubelet[2900]: W0916 04:41:40.221996 2900 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Sep 16 04:41:40.222070 kubelet[2900]: E0916 04:41:40.222041 2900 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.12:6443: connect: connection refused" logger="UnhandledError" Sep 16 04:41:40.268884 kubelet[2900]: E0916 04:41:40.268843 2900 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.0.0-n-404d4275b5?timeout=10s\": dial tcp 10.200.20.12:6443: connect: connection refused" interval="1.6s" Sep 16 04:41:40.449485 kubelet[2900]: I0916 04:41:40.449359 2900 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.0.0-n-404d4275b5" Sep 16 04:41:40.450009 kubelet[2900]: E0916 04:41:40.449985 2900 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.12:6443/api/v1/nodes\": dial tcp 10.200.20.12:6443: connect: connection refused" node="ci-4459.0.0-n-404d4275b5" Sep 16 04:41:40.518817 kubelet[2900]: W0916 04:41:40.518765 2900 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Sep 16 04:41:40.519178 kubelet[2900]: E0916 04:41:40.518804 2900 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.12:6443: connect: connection refused" logger="UnhandledError" Sep 16 04:41:40.681471 containerd[1865]: time="2025-09-16T04:41:40.681432018Z" level=info msg="connecting to shim 1d8a4a9021c2b92007d4066fb7d522db3d7e76a79d0311a74577c8041db2ad11" address="unix:///run/containerd/s/1845baabdcd763f43341b682ae980c78168c0a11aa388a57db021aef884cadb2" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:41:40.698747 systemd[1]: Started cri-containerd-1d8a4a9021c2b92007d4066fb7d522db3d7e76a79d0311a74577c8041db2ad11.scope - libcontainer container 1d8a4a9021c2b92007d4066fb7d522db3d7e76a79d0311a74577c8041db2ad11. 
Sep 16 04:41:40.746905 containerd[1865]: time="2025-09-16T04:41:40.746851065Z" level=info msg="connecting to shim c44073a1630ac8bdb960e844b3a6ce802dd00b53b01e7bab8f95f004446e6117" address="unix:///run/containerd/s/2c3f6cc684f7325d10b32bdd8c0549b3ab66db89d78fe13c5e1ab75424eb3f64" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:41:40.752633 containerd[1865]: time="2025-09-16T04:41:40.752594914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.0.0-n-404d4275b5,Uid:6bb9e7e24d443a18fce3222a9a3e6eb9,Namespace:kube-system,Attempt:0,} returns sandbox id \"1d8a4a9021c2b92007d4066fb7d522db3d7e76a79d0311a74577c8041db2ad11\"" Sep 16 04:41:40.756736 containerd[1865]: time="2025-09-16T04:41:40.756694099Z" level=info msg="CreateContainer within sandbox \"1d8a4a9021c2b92007d4066fb7d522db3d7e76a79d0311a74577c8041db2ad11\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 16 04:41:40.757439 containerd[1865]: time="2025-09-16T04:41:40.757360438Z" level=info msg="connecting to shim 6b64667f916a88146bb5ba508ca709777116bf0f95cf7e89eb1f74cda343f91a" address="unix:///run/containerd/s/a623fe4413795e0182625888cad051e9001fb149036b3a36d42cf9c25100f605" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:41:40.774887 systemd[1]: Started cri-containerd-c44073a1630ac8bdb960e844b3a6ce802dd00b53b01e7bab8f95f004446e6117.scope - libcontainer container c44073a1630ac8bdb960e844b3a6ce802dd00b53b01e7bab8f95f004446e6117. Sep 16 04:41:40.778588 systemd[1]: Started cri-containerd-6b64667f916a88146bb5ba508ca709777116bf0f95cf7e89eb1f74cda343f91a.scope - libcontainer container 6b64667f916a88146bb5ba508ca709777116bf0f95cf7e89eb1f74cda343f91a. Sep 16 04:41:40.784770 containerd[1865]: time="2025-09-16T04:41:40.784715340Z" level=info msg="Container 3232835561ee4873ee8411906cb33fa322bb059514983e61be24cd0fa343b830: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:41:40.811822 containerd[1865]: time="2025-09-16T04:41:40.811652934Z" level=info msg="CreateContainer within sandbox \"1d8a4a9021c2b92007d4066fb7d522db3d7e76a79d0311a74577c8041db2ad11\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3232835561ee4873ee8411906cb33fa322bb059514983e61be24cd0fa343b830\"" Sep 16 04:41:40.813628 containerd[1865]: time="2025-09-16T04:41:40.812698636Z" level=info msg="StartContainer for \"3232835561ee4873ee8411906cb33fa322bb059514983e61be24cd0fa343b830\"" Sep 16 04:41:40.813628 containerd[1865]: time="2025-09-16T04:41:40.813416729Z" level=info msg="connecting to shim 3232835561ee4873ee8411906cb33fa322bb059514983e61be24cd0fa343b830" address="unix:///run/containerd/s/1845baabdcd763f43341b682ae980c78168c0a11aa388a57db021aef884cadb2" protocol=ttrpc version=3 Sep 16 04:41:40.818139 containerd[1865]: time="2025-09-16T04:41:40.818116204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.0.0-n-404d4275b5,Uid:367300998050b6329fb25476494f3597,Namespace:kube-system,Attempt:0,} returns sandbox id \"6b64667f916a88146bb5ba508ca709777116bf0f95cf7e89eb1f74cda343f91a\"" Sep 16 04:41:40.820857 containerd[1865]: time="2025-09-16T04:41:40.820838020Z" level=info msg="CreateContainer within sandbox \"6b64667f916a88146bb5ba508ca709777116bf0f95cf7e89eb1f74cda343f91a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 16 04:41:40.823912 containerd[1865]: time="2025-09-16T04:41:40.823883574Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4459.0.0-n-404d4275b5,Uid:31f0dd937061361b7c132e1a21f68015,Namespace:kube-system,Attempt:0,} returns sandbox id \"c44073a1630ac8bdb960e844b3a6ce802dd00b53b01e7bab8f95f004446e6117\"" Sep 16 04:41:40.828378 containerd[1865]: time="2025-09-16T04:41:40.828351521Z" level=info msg="CreateContainer within sandbox \"c44073a1630ac8bdb960e844b3a6ce802dd00b53b01e7bab8f95f004446e6117\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 16 04:41:40.830731 systemd[1]: Started cri-containerd-3232835561ee4873ee8411906cb33fa322bb059514983e61be24cd0fa343b830.scope - libcontainer container 3232835561ee4873ee8411906cb33fa322bb059514983e61be24cd0fa343b830. Sep 16 04:41:40.846936 containerd[1865]: time="2025-09-16T04:41:40.846877467Z" level=info msg="Container 551c037dd196d71cd1ca68b6d9624d7742398dfd9c48be9486d0ffa480f5668b: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:41:40.867559 containerd[1865]: time="2025-09-16T04:41:40.867534244Z" level=info msg="Container dbfd760dc34e9d239d6912cc7bdd6cf79b73f1b92814227bd9bcb38f8f51a325: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:41:40.870277 containerd[1865]: time="2025-09-16T04:41:40.870251900Z" level=info msg="CreateContainer within sandbox \"6b64667f916a88146bb5ba508ca709777116bf0f95cf7e89eb1f74cda343f91a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"551c037dd196d71cd1ca68b6d9624d7742398dfd9c48be9486d0ffa480f5668b\"" Sep 16 04:41:40.874345 containerd[1865]: time="2025-09-16T04:41:40.874125678Z" level=info msg="StartContainer for \"551c037dd196d71cd1ca68b6d9624d7742398dfd9c48be9486d0ffa480f5668b\"" Sep 16 04:41:40.875375 containerd[1865]: time="2025-09-16T04:41:40.875357146Z" level=info msg="StartContainer for \"3232835561ee4873ee8411906cb33fa322bb059514983e61be24cd0fa343b830\" returns successfully" Sep 16 04:41:40.878198 containerd[1865]: time="2025-09-16T04:41:40.878177645Z" level=info msg="connecting to shim 551c037dd196d71cd1ca68b6d9624d7742398dfd9c48be9486d0ffa480f5668b" address="unix:///run/containerd/s/a623fe4413795e0182625888cad051e9001fb149036b3a36d42cf9c25100f605" protocol=ttrpc version=3 Sep 16 04:41:40.891518 containerd[1865]: time="2025-09-16T04:41:40.891492182Z" level=info msg="CreateContainer within sandbox \"c44073a1630ac8bdb960e844b3a6ce802dd00b53b01e7bab8f95f004446e6117\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"dbfd760dc34e9d239d6912cc7bdd6cf79b73f1b92814227bd9bcb38f8f51a325\"" Sep 16 04:41:40.892263 containerd[1865]: time="2025-09-16T04:41:40.892200218Z" level=info msg="StartContainer for \"dbfd760dc34e9d239d6912cc7bdd6cf79b73f1b92814227bd9bcb38f8f51a325\"" Sep 16 04:41:40.894291 containerd[1865]: time="2025-09-16T04:41:40.894270951Z" level=info msg="connecting to shim dbfd760dc34e9d239d6912cc7bdd6cf79b73f1b92814227bd9bcb38f8f51a325" address="unix:///run/containerd/s/2c3f6cc684f7325d10b32bdd8c0549b3ab66db89d78fe13c5e1ab75424eb3f64" protocol=ttrpc version=3 Sep 16 04:41:40.898331 systemd[1]: Started cri-containerd-551c037dd196d71cd1ca68b6d9624d7742398dfd9c48be9486d0ffa480f5668b.scope - libcontainer container 551c037dd196d71cd1ca68b6d9624d7742398dfd9c48be9486d0ffa480f5668b. Sep 16 04:41:40.918719 systemd[1]: Started cri-containerd-dbfd760dc34e9d239d6912cc7bdd6cf79b73f1b92814227bd9bcb38f8f51a325.scope - libcontainer container dbfd760dc34e9d239d6912cc7bdd6cf79b73f1b92814227bd9bcb38f8f51a325. 
Sep 16 04:41:40.935299 kubelet[2900]: E0916 04:41:40.935266 2900 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.12:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.12:6443: connect: connection refused" logger="UnhandledError" Sep 16 04:41:40.976820 containerd[1865]: time="2025-09-16T04:41:40.976758902Z" level=info msg="StartContainer for \"dbfd760dc34e9d239d6912cc7bdd6cf79b73f1b92814227bd9bcb38f8f51a325\" returns successfully" Sep 16 04:41:40.978049 containerd[1865]: time="2025-09-16T04:41:40.977965657Z" level=info msg="StartContainer for \"551c037dd196d71cd1ca68b6d9624d7742398dfd9c48be9486d0ffa480f5668b\" returns successfully" Sep 16 04:41:41.022862 kubelet[2900]: E0916 04:41:41.022778 2900 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.0.0-n-404d4275b5\" not found" node="ci-4459.0.0-n-404d4275b5" Sep 16 04:41:41.026174 kubelet[2900]: E0916 04:41:41.026071 2900 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.0.0-n-404d4275b5\" not found" node="ci-4459.0.0-n-404d4275b5" Sep 16 04:41:41.028407 kubelet[2900]: E0916 04:41:41.028012 2900 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.0.0-n-404d4275b5\" not found" node="ci-4459.0.0-n-404d4275b5" Sep 16 04:41:42.031392 kubelet[2900]: E0916 04:41:42.031243 2900 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.0.0-n-404d4275b5\" not found" node="ci-4459.0.0-n-404d4275b5" Sep 16 04:41:42.032071 kubelet[2900]: E0916 04:41:42.032047 2900 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.0.0-n-404d4275b5\" not found" node="ci-4459.0.0-n-404d4275b5" Sep 16 04:41:42.051327 kubelet[2900]: I0916 04:41:42.051310 2900 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.0.0-n-404d4275b5" Sep 16 04:41:42.116338 kubelet[2900]: E0916 04:41:42.116287 2900 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459.0.0-n-404d4275b5\" not found" node="ci-4459.0.0-n-404d4275b5" Sep 16 04:41:42.184943 kubelet[2900]: I0916 04:41:42.184910 2900 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.0.0-n-404d4275b5" Sep 16 04:41:42.184943 kubelet[2900]: E0916 04:41:42.184942 2900 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4459.0.0-n-404d4275b5\": node \"ci-4459.0.0-n-404d4275b5\" not found" Sep 16 04:41:42.204699 kubelet[2900]: E0916 04:41:42.204055 2900 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.0.0-n-404d4275b5\" not found" Sep 16 04:41:42.305809 kubelet[2900]: E0916 04:41:42.305686 2900 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.0.0-n-404d4275b5\" not found" Sep 16 04:41:42.406560 kubelet[2900]: E0916 04:41:42.406519 2900 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.0.0-n-404d4275b5\" not found" Sep 16 04:41:42.566282 kubelet[2900]: I0916 04:41:42.565985 2900 kubelet.go:3194] "Creating a mirror pod for static pod" 
pod="kube-system/kube-controller-manager-ci-4459.0.0-n-404d4275b5" Sep 16 04:41:42.570840 kubelet[2900]: E0916 04:41:42.570810 2900 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459.0.0-n-404d4275b5\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459.0.0-n-404d4275b5" Sep 16 04:41:42.570840 kubelet[2900]: I0916 04:41:42.570835 2900 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.0.0-n-404d4275b5" Sep 16 04:41:42.572135 kubelet[2900]: E0916 04:41:42.572109 2900 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.0.0-n-404d4275b5\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459.0.0-n-404d4275b5" Sep 16 04:41:42.572135 kubelet[2900]: I0916 04:41:42.572131 2900 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.0.0-n-404d4275b5" Sep 16 04:41:42.573258 kubelet[2900]: E0916 04:41:42.573225 2900 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.0.0-n-404d4275b5\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459.0.0-n-404d4275b5" Sep 16 04:41:42.863201 kubelet[2900]: I0916 04:41:42.862921 2900 apiserver.go:52] "Watching apiserver" Sep 16 04:41:42.865692 kubelet[2900]: I0916 04:41:42.865660 2900 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 16 04:41:43.031428 kubelet[2900]: I0916 04:41:43.031327 2900 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.0.0-n-404d4275b5" Sep 16 04:41:43.033221 kubelet[2900]: E0916 04:41:43.033189 2900 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.0.0-n-404d4275b5\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459.0.0-n-404d4275b5" Sep 16 04:41:43.877676 kubelet[2900]: I0916 04:41:43.877313 2900 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.0.0-n-404d4275b5" Sep 16 04:41:43.888184 kubelet[2900]: W0916 04:41:43.888120 2900 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 16 04:41:44.320703 systemd[1]: Reload requested from client PID 3169 ('systemctl') (unit session-9.scope)... Sep 16 04:41:44.320717 systemd[1]: Reloading... Sep 16 04:41:44.388677 zram_generator::config[3213]: No configuration found. Sep 16 04:41:44.571387 systemd[1]: Reloading finished in 250 ms. Sep 16 04:41:44.593795 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:41:44.606410 systemd[1]: kubelet.service: Deactivated successfully. Sep 16 04:41:44.607649 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:41:44.607703 systemd[1]: kubelet.service: Consumed 721ms CPU time, 130.1M memory peak. Sep 16 04:41:44.610713 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:41:44.746613 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 16 04:41:44.752835 (kubelet)[3280]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 16 04:41:44.783645 kubelet[3280]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 16 04:41:44.783645 kubelet[3280]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 16 04:41:44.783645 kubelet[3280]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 16 04:41:44.783929 kubelet[3280]: I0916 04:41:44.783751 3280 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 16 04:41:44.787985 kubelet[3280]: I0916 04:41:44.787940 3280 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 16 04:41:44.787985 kubelet[3280]: I0916 04:41:44.787959 3280 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 16 04:41:44.788373 kubelet[3280]: I0916 04:41:44.788347 3280 server.go:954] "Client rotation is on, will bootstrap in background" Sep 16 04:41:44.789322 kubelet[3280]: I0916 04:41:44.789307 3280 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 16 04:41:44.791623 kubelet[3280]: I0916 04:41:44.791139 3280 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 16 04:41:44.796635 kubelet[3280]: I0916 04:41:44.795962 3280 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 16 04:41:44.798808 kubelet[3280]: I0916 04:41:44.798789 3280 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 16 04:41:44.799067 kubelet[3280]: I0916 04:41:44.799045 3280 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 16 04:41:44.799236 kubelet[3280]: I0916 04:41:44.799121 3280 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.0.0-n-404d4275b5","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 16 04:41:44.799341 kubelet[3280]: I0916 04:41:44.799331 3280 topology_manager.go:138] "Creating topology manager with none policy" Sep 16 04:41:44.799393 kubelet[3280]: I0916 04:41:44.799386 3280 container_manager_linux.go:304] "Creating device plugin manager" Sep 16 04:41:44.799467 kubelet[3280]: I0916 04:41:44.799459 3280 state_mem.go:36] "Initialized new in-memory state store" Sep 16 04:41:44.799638 kubelet[3280]: I0916 04:41:44.799630 3280 kubelet.go:446] "Attempting to sync node with API server" Sep 16 04:41:44.800010 kubelet[3280]: I0916 04:41:44.799998 3280 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 16 04:41:44.800090 kubelet[3280]: I0916 04:41:44.800084 3280 kubelet.go:352] "Adding apiserver pod source" Sep 16 04:41:44.800135 kubelet[3280]: I0916 04:41:44.800129 3280 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 16 04:41:44.802821 kubelet[3280]: I0916 04:41:44.802797 3280 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 16 04:41:44.803216 kubelet[3280]: I0916 04:41:44.803200 3280 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 16 04:41:44.804927 kubelet[3280]: I0916 04:41:44.804908 3280 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 16 04:41:44.805029 kubelet[3280]: I0916 04:41:44.805019 3280 server.go:1287] "Started kubelet" Sep 16 04:41:44.806296 kubelet[3280]: I0916 04:41:44.806276 3280 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 16 04:41:44.814289 kubelet[3280]: I0916 04:41:44.814262 3280 server.go:169] 
"Starting to listen" address="0.0.0.0" port=10250 Sep 16 04:41:44.815188 kubelet[3280]: I0916 04:41:44.815170 3280 server.go:479] "Adding debug handlers to kubelet server" Sep 16 04:41:44.816482 kubelet[3280]: I0916 04:41:44.816447 3280 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 16 04:41:44.817807 kubelet[3280]: I0916 04:41:44.817791 3280 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 16 04:41:44.817904 kubelet[3280]: I0916 04:41:44.816621 3280 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 16 04:41:44.818081 kubelet[3280]: I0916 04:41:44.818065 3280 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 16 04:41:44.818189 kubelet[3280]: I0916 04:41:44.816596 3280 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 16 04:41:44.818530 kubelet[3280]: I0916 04:41:44.818516 3280 reconciler.go:26] "Reconciler: start to sync state" Sep 16 04:41:44.820476 kubelet[3280]: I0916 04:41:44.820453 3280 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 16 04:41:44.822658 kubelet[3280]: I0916 04:41:44.822584 3280 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 16 04:41:44.822736 kubelet[3280]: I0916 04:41:44.822726 3280 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 16 04:41:44.822788 kubelet[3280]: I0916 04:41:44.822780 3280 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 16 04:41:44.822829 kubelet[3280]: I0916 04:41:44.822820 3280 kubelet.go:2382] "Starting kubelet main sync loop" Sep 16 04:41:44.822896 kubelet[3280]: E0916 04:41:44.822884 3280 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 16 04:41:44.824576 kubelet[3280]: I0916 04:41:44.824556 3280 factory.go:221] Registration of the systemd container factory successfully Sep 16 04:41:44.824807 kubelet[3280]: I0916 04:41:44.824789 3280 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 16 04:41:44.829116 kubelet[3280]: E0916 04:41:44.829091 3280 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 16 04:41:44.829396 kubelet[3280]: I0916 04:41:44.829382 3280 factory.go:221] Registration of the containerd container factory successfully Sep 16 04:41:44.878396 kubelet[3280]: I0916 04:41:44.878367 3280 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 16 04:41:44.878396 kubelet[3280]: I0916 04:41:44.878384 3280 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 16 04:41:44.878396 kubelet[3280]: I0916 04:41:44.878402 3280 state_mem.go:36] "Initialized new in-memory state store" Sep 16 04:41:44.878539 kubelet[3280]: I0916 04:41:44.878514 3280 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 16 04:41:44.878539 kubelet[3280]: I0916 04:41:44.878522 3280 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 16 04:41:44.878539 kubelet[3280]: I0916 04:41:44.878535 3280 policy_none.go:49] "None policy: Start" Sep 16 04:41:44.878590 kubelet[3280]: I0916 04:41:44.878542 3280 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 16 04:41:44.878590 kubelet[3280]: I0916 04:41:44.878550 3280 state_mem.go:35] "Initializing new in-memory state store" Sep 16 04:41:44.878722 kubelet[3280]: I0916 04:41:44.878709 3280 state_mem.go:75] "Updated machine memory state" Sep 16 04:41:44.882346 kubelet[3280]: I0916 04:41:44.882274 3280 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 16 04:41:44.882429 kubelet[3280]: I0916 04:41:44.882415 3280 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 16 04:41:44.882455 kubelet[3280]: I0916 04:41:44.882429 3280 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 16 04:41:44.883114 kubelet[3280]: I0916 04:41:44.883086 3280 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 16 04:41:44.883913 kubelet[3280]: E0916 04:41:44.883891 3280 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 16 04:41:44.924306 kubelet[3280]: I0916 04:41:44.924285 3280 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.0.0-n-404d4275b5" Sep 16 04:41:44.924671 kubelet[3280]: I0916 04:41:44.924651 3280 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.0.0-n-404d4275b5" Sep 16 04:41:44.925267 kubelet[3280]: I0916 04:41:44.925250 3280 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.0.0-n-404d4275b5" Sep 16 04:41:44.931351 kubelet[3280]: W0916 04:41:44.931333 3280 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 16 04:41:44.936417 kubelet[3280]: W0916 04:41:44.936393 3280 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 16 04:41:44.938174 kubelet[3280]: W0916 04:41:44.938141 3280 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 16 04:41:44.938334 kubelet[3280]: E0916 04:41:44.938233 3280 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459.0.0-n-404d4275b5\" already exists" pod="kube-system/kube-controller-manager-ci-4459.0.0-n-404d4275b5" Sep 16 04:41:44.988908 kubelet[3280]: I0916 04:41:44.988882 3280 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.0.0-n-404d4275b5" Sep 16 04:41:45.009721 kubelet[3280]: I0916 04:41:45.009696 3280 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459.0.0-n-404d4275b5" Sep 16 04:41:45.009905 kubelet[3280]: I0916 04:41:45.009775 3280 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.0.0-n-404d4275b5" Sep 16 04:41:45.020549 kubelet[3280]: I0916 04:41:45.020522 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6bb9e7e24d443a18fce3222a9a3e6eb9-k8s-certs\") pod \"kube-controller-manager-ci-4459.0.0-n-404d4275b5\" (UID: \"6bb9e7e24d443a18fce3222a9a3e6eb9\") " pod="kube-system/kube-controller-manager-ci-4459.0.0-n-404d4275b5" Sep 16 04:41:45.020815 kubelet[3280]: I0916 04:41:45.020695 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6bb9e7e24d443a18fce3222a9a3e6eb9-kubeconfig\") pod \"kube-controller-manager-ci-4459.0.0-n-404d4275b5\" (UID: \"6bb9e7e24d443a18fce3222a9a3e6eb9\") " pod="kube-system/kube-controller-manager-ci-4459.0.0-n-404d4275b5" Sep 16 04:41:45.020815 kubelet[3280]: I0916 04:41:45.020716 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/367300998050b6329fb25476494f3597-kubeconfig\") pod \"kube-scheduler-ci-4459.0.0-n-404d4275b5\" (UID: \"367300998050b6329fb25476494f3597\") " pod="kube-system/kube-scheduler-ci-4459.0.0-n-404d4275b5" Sep 16 04:41:45.020815 kubelet[3280]: I0916 04:41:45.020730 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6bb9e7e24d443a18fce3222a9a3e6eb9-ca-certs\") pod 
\"kube-controller-manager-ci-4459.0.0-n-404d4275b5\" (UID: \"6bb9e7e24d443a18fce3222a9a3e6eb9\") " pod="kube-system/kube-controller-manager-ci-4459.0.0-n-404d4275b5" Sep 16 04:41:45.020815 kubelet[3280]: I0916 04:41:45.020741 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6bb9e7e24d443a18fce3222a9a3e6eb9-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.0.0-n-404d4275b5\" (UID: \"6bb9e7e24d443a18fce3222a9a3e6eb9\") " pod="kube-system/kube-controller-manager-ci-4459.0.0-n-404d4275b5" Sep 16 04:41:45.020815 kubelet[3280]: I0916 04:41:45.020752 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6bb9e7e24d443a18fce3222a9a3e6eb9-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.0.0-n-404d4275b5\" (UID: \"6bb9e7e24d443a18fce3222a9a3e6eb9\") " pod="kube-system/kube-controller-manager-ci-4459.0.0-n-404d4275b5" Sep 16 04:41:45.020985 kubelet[3280]: I0916 04:41:45.020762 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/31f0dd937061361b7c132e1a21f68015-ca-certs\") pod \"kube-apiserver-ci-4459.0.0-n-404d4275b5\" (UID: \"31f0dd937061361b7c132e1a21f68015\") " pod="kube-system/kube-apiserver-ci-4459.0.0-n-404d4275b5" Sep 16 04:41:45.020985 kubelet[3280]: I0916 04:41:45.020772 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/31f0dd937061361b7c132e1a21f68015-k8s-certs\") pod \"kube-apiserver-ci-4459.0.0-n-404d4275b5\" (UID: \"31f0dd937061361b7c132e1a21f68015\") " pod="kube-system/kube-apiserver-ci-4459.0.0-n-404d4275b5" Sep 16 04:41:45.020985 kubelet[3280]: I0916 04:41:45.020781 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/31f0dd937061361b7c132e1a21f68015-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.0.0-n-404d4275b5\" (UID: \"31f0dd937061361b7c132e1a21f68015\") " pod="kube-system/kube-apiserver-ci-4459.0.0-n-404d4275b5" Sep 16 04:41:45.802260 kubelet[3280]: I0916 04:41:45.802218 3280 apiserver.go:52] "Watching apiserver" Sep 16 04:41:45.819215 kubelet[3280]: I0916 04:41:45.818783 3280 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 16 04:41:45.867677 kubelet[3280]: I0916 04:41:45.867090 3280 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.0.0-n-404d4275b5" Sep 16 04:41:45.867775 kubelet[3280]: I0916 04:41:45.867747 3280 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.0.0-n-404d4275b5" Sep 16 04:41:45.880385 kubelet[3280]: W0916 04:41:45.880361 3280 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 16 04:41:45.880477 kubelet[3280]: E0916 04:41:45.880403 3280 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.0.0-n-404d4275b5\" already exists" pod="kube-system/kube-scheduler-ci-4459.0.0-n-404d4275b5" Sep 16 04:41:45.885811 kubelet[3280]: W0916 04:41:45.885621 3280 warnings.go:70] metadata.name: this is used in the Pod's hostname, which 
can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 16 04:41:45.885811 kubelet[3280]: E0916 04:41:45.885654 3280 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.0.0-n-404d4275b5\" already exists" pod="kube-system/kube-apiserver-ci-4459.0.0-n-404d4275b5" Sep 16 04:41:45.896831 kubelet[3280]: I0916 04:41:45.896621 3280 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459.0.0-n-404d4275b5" podStartSLOduration=1.8965948940000001 podStartE2EDuration="1.896594894s" podCreationTimestamp="2025-09-16 04:41:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:41:45.886846913 +0000 UTC m=+1.131032393" watchObservedRunningTime="2025-09-16 04:41:45.896594894 +0000 UTC m=+1.140780366" Sep 16 04:41:45.897088 kubelet[3280]: I0916 04:41:45.897052 3280 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459.0.0-n-404d4275b5" podStartSLOduration=1.897043252 podStartE2EDuration="1.897043252s" podCreationTimestamp="2025-09-16 04:41:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:41:45.896880703 +0000 UTC m=+1.141066183" watchObservedRunningTime="2025-09-16 04:41:45.897043252 +0000 UTC m=+1.141228732" Sep 16 04:41:45.921923 kubelet[3280]: I0916 04:41:45.921731 3280 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459.0.0-n-404d4275b5" podStartSLOduration=2.921724268 podStartE2EDuration="2.921724268s" podCreationTimestamp="2025-09-16 04:41:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:41:45.907456318 +0000 UTC m=+1.151641798" watchObservedRunningTime="2025-09-16 04:41:45.921724268 +0000 UTC m=+1.165909748" Sep 16 04:41:47.679843 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Sep 16 04:41:49.122801 kubelet[3280]: I0916 04:41:49.122720 3280 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 16 04:41:49.124037 kubelet[3280]: I0916 04:41:49.123727 3280 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 16 04:41:49.124062 containerd[1865]: time="2025-09-16T04:41:49.123328047Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 16 04:41:49.942540 systemd[1]: Created slice kubepods-besteffort-podbd01128d_a74d_46c3_b1a4_6a88fe039f68.slice - libcontainer container kubepods-besteffort-podbd01128d_a74d_46c3_b1a4_6a88fe039f68.slice. 
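
The `Created slice kubepods-besteffort-pod…` entry above is the systemd cgroup driver at work (the kubelet was started with `"CgroupDriver":"systemd"` per the NodeConfig dump earlier): the pod's QoS class and UID are folded into a systemd slice unit name, with the UID's dashes swapped for underscores since `-` is systemd's unit-name separator. A minimal Go sketch of that naming convention, checked against the kube-proxy pod that follows (`podSliceName` is a hypothetical helper, not kubelet code):

```go
package main

import (
	"fmt"
	"strings"
)

// podSliceName reproduces the slice-naming convention visible in the log:
// kubepods-<qos>-pod<UID with '-' replaced by '_'>.slice
func podSliceName(qosClass, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice",
		strings.ToLower(qosClass),
		strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	// UID taken from the kube-proxy-7xf46 volume entries that follow.
	fmt.Println(podSliceName("BestEffort", "bd01128d-a74d-46c3-b1a4-6a88fe039f68"))
	// kubepods-besteffort-podbd01128d_a74d_46c3_b1a4_6a88fe039f68.slice
}
```
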
Sep 16 04:41:49.944094 kubelet[3280]: I0916 04:41:49.943785 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/bd01128d-a74d-46c3-b1a4-6a88fe039f68-kube-proxy\") pod \"kube-proxy-7xf46\" (UID: \"bd01128d-a74d-46c3-b1a4-6a88fe039f68\") " pod="kube-system/kube-proxy-7xf46" Sep 16 04:41:49.944094 kubelet[3280]: I0916 04:41:49.943820 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bd01128d-a74d-46c3-b1a4-6a88fe039f68-xtables-lock\") pod \"kube-proxy-7xf46\" (UID: \"bd01128d-a74d-46c3-b1a4-6a88fe039f68\") " pod="kube-system/kube-proxy-7xf46" Sep 16 04:41:49.944094 kubelet[3280]: I0916 04:41:49.943833 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bd01128d-a74d-46c3-b1a4-6a88fe039f68-lib-modules\") pod \"kube-proxy-7xf46\" (UID: \"bd01128d-a74d-46c3-b1a4-6a88fe039f68\") " pod="kube-system/kube-proxy-7xf46" Sep 16 04:41:49.944094 kubelet[3280]: I0916 04:41:49.943847 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k26gw\" (UniqueName: \"kubernetes.io/projected/bd01128d-a74d-46c3-b1a4-6a88fe039f68-kube-api-access-k26gw\") pod \"kube-proxy-7xf46\" (UID: \"bd01128d-a74d-46c3-b1a4-6a88fe039f68\") " pod="kube-system/kube-proxy-7xf46" Sep 16 04:41:50.129934 systemd[1]: Created slice kubepods-besteffort-podd2540c0b_93b9_43b4_9fc9_37b7bd2c09ef.slice - libcontainer container kubepods-besteffort-podd2540c0b_93b9_43b4_9fc9_37b7bd2c09ef.slice. Sep 16 04:41:50.145963 kubelet[3280]: I0916 04:41:50.145918 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d2540c0b-93b9-43b4-9fc9-37b7bd2c09ef-var-lib-calico\") pod \"tigera-operator-755d956888-fq2r7\" (UID: \"d2540c0b-93b9-43b4-9fc9-37b7bd2c09ef\") " pod="tigera-operator/tigera-operator-755d956888-fq2r7" Sep 16 04:41:50.146304 kubelet[3280]: I0916 04:41:50.146262 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-st6gt\" (UniqueName: \"kubernetes.io/projected/d2540c0b-93b9-43b4-9fc9-37b7bd2c09ef-kube-api-access-st6gt\") pod \"tigera-operator-755d956888-fq2r7\" (UID: \"d2540c0b-93b9-43b4-9fc9-37b7bd2c09ef\") " pod="tigera-operator/tigera-operator-755d956888-fq2r7" Sep 16 04:41:50.251209 containerd[1865]: time="2025-09-16T04:41:50.251180072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7xf46,Uid:bd01128d-a74d-46c3-b1a4-6a88fe039f68,Namespace:kube-system,Attempt:0,}" Sep 16 04:41:50.297263 containerd[1865]: time="2025-09-16T04:41:50.297040902Z" level=info msg="connecting to shim 6297be775569def44d9ca92649d55639e08e8494fefe5b9ff27356cf792c00ed" address="unix:///run/containerd/s/0efb972a9bfd9f9efa75e9e0c45bbf4fb61d6482ed60f2fdf2f8a127b5425328" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:41:50.316743 systemd[1]: Started cri-containerd-6297be775569def44d9ca92649d55639e08e8494fefe5b9ff27356cf792c00ed.scope - libcontainer container 6297be775569def44d9ca92649d55639e08e8494fefe5b9ff27356cf792c00ed. 
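
The kubelet entries throughout this journal use klog's structured format: a one-letter severity, an `MMDD hh:mm:ss.micros` timestamp, the PID, `file:line`, then a quoted message and `key="value"` pairs. A rough stdlib-only sketch of pulling those fields apart, assuming no escaped quotes inside values (illustrative only, not the kubelet's or journald's own parsing):

```go
package main

import (
	"fmt"
	"regexp"
)

var (
	// severity, timestamp, PID, source file:line, remainder of the message
	header = regexp.MustCompile(`^([IWEF])(\d{4} \d{2}:\d{2}:\d{2}\.\d+)\s+(\d+)\s+([\w.]+:\d+)\]\s+(.*)$`)
	kvPair = regexp.MustCompile(`(\w+)="([^"]*)"`)
)

func main() {
	// Copied from the node-registration entry earlier in this log.
	line := `I0916 04:41:45.009696 3280 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459.0.0-n-404d4275b5"`
	m := header.FindStringSubmatch(line)
	if m == nil {
		return
	}
	fmt.Printf("severity=%s time=%s pid=%s source=%s\n", m[1], m[2], m[3], m[4])
	for _, kv := range kvPair.FindAllStringSubmatch(m[5], -1) {
		fmt.Printf("  %s = %q\n", kv[1], kv[2])
	}
}
```
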
Sep 16 04:41:50.336438 containerd[1865]: time="2025-09-16T04:41:50.336405443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7xf46,Uid:bd01128d-a74d-46c3-b1a4-6a88fe039f68,Namespace:kube-system,Attempt:0,} returns sandbox id \"6297be775569def44d9ca92649d55639e08e8494fefe5b9ff27356cf792c00ed\"" Sep 16 04:41:50.339790 containerd[1865]: time="2025-09-16T04:41:50.339755255Z" level=info msg="CreateContainer within sandbox \"6297be775569def44d9ca92649d55639e08e8494fefe5b9ff27356cf792c00ed\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 16 04:41:50.360984 containerd[1865]: time="2025-09-16T04:41:50.360944862Z" level=info msg="Container e34d42c8fb3112f51fceee5faaa8fd8fe1cb7ba3268e7f8e75caeb7652cbfffe: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:41:50.380988 containerd[1865]: time="2025-09-16T04:41:50.380955370Z" level=info msg="CreateContainer within sandbox \"6297be775569def44d9ca92649d55639e08e8494fefe5b9ff27356cf792c00ed\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e34d42c8fb3112f51fceee5faaa8fd8fe1cb7ba3268e7f8e75caeb7652cbfffe\"" Sep 16 04:41:50.381476 containerd[1865]: time="2025-09-16T04:41:50.381452033Z" level=info msg="StartContainer for \"e34d42c8fb3112f51fceee5faaa8fd8fe1cb7ba3268e7f8e75caeb7652cbfffe\"" Sep 16 04:41:50.383376 containerd[1865]: time="2025-09-16T04:41:50.383342345Z" level=info msg="connecting to shim e34d42c8fb3112f51fceee5faaa8fd8fe1cb7ba3268e7f8e75caeb7652cbfffe" address="unix:///run/containerd/s/0efb972a9bfd9f9efa75e9e0c45bbf4fb61d6482ed60f2fdf2f8a127b5425328" protocol=ttrpc version=3 Sep 16 04:41:50.395716 systemd[1]: Started cri-containerd-e34d42c8fb3112f51fceee5faaa8fd8fe1cb7ba3268e7f8e75caeb7652cbfffe.scope - libcontainer container e34d42c8fb3112f51fceee5faaa8fd8fe1cb7ba3268e7f8e75caeb7652cbfffe. Sep 16 04:41:50.423145 containerd[1865]: time="2025-09-16T04:41:50.423113058Z" level=info msg="StartContainer for \"e34d42c8fb3112f51fceee5faaa8fd8fe1cb7ba3268e7f8e75caeb7652cbfffe\" returns successfully" Sep 16 04:41:50.433367 containerd[1865]: time="2025-09-16T04:41:50.433336931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-fq2r7,Uid:d2540c0b-93b9-43b4-9fc9-37b7bd2c09ef,Namespace:tigera-operator,Attempt:0,}" Sep 16 04:41:50.479481 containerd[1865]: time="2025-09-16T04:41:50.479444624Z" level=info msg="connecting to shim 7b74cf9e64f0f6970ab71298684edc30f7c17f79563f89ce8c3b2f3c65433b56" address="unix:///run/containerd/s/59682c55a65d7531599ed1de6e80a0c108b45b6be1a043a24b0ac9449957c1dd" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:41:50.500154 systemd[1]: Started cri-containerd-7b74cf9e64f0f6970ab71298684edc30f7c17f79563f89ce8c3b2f3c65433b56.scope - libcontainer container 7b74cf9e64f0f6970ab71298684edc30f7c17f79563f89ce8c3b2f3c65433b56. Sep 16 04:41:50.531010 containerd[1865]: time="2025-09-16T04:41:50.530879893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-fq2r7,Uid:d2540c0b-93b9-43b4-9fc9-37b7bd2c09ef,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"7b74cf9e64f0f6970ab71298684edc30f7c17f79563f89ce8c3b2f3c65433b56\"" Sep 16 04:41:50.533662 containerd[1865]: time="2025-09-16T04:41:50.533644543Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 16 04:41:51.055641 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2917602263.mount: Deactivated successfully. 
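
The tigera-operator image pull that starts here is reported finished a few entries later with `in 2.450558837s`; that figure is simply the wall-clock spread between the PullImage and Pulled events. Recomputing it from the two logged RFC 3339 timestamps as a sanity check (the ~100 µs excess is log-emission latency between measuring the duration and writing the entry):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the containerd PullImage / Pulled entries.
	start, err := time.Parse(time.RFC3339Nano, "2025-09-16T04:41:50.533644543Z")
	if err != nil {
		panic(err)
	}
	done, err := time.Parse(time.RFC3339Nano, "2025-09-16T04:41:52.984305424Z")
	if err != nil {
		panic(err)
	}
	fmt.Println(done.Sub(start)) // 2.450660881s, vs. the reported 2.450558837s
}
```
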
Sep 16 04:41:51.875044 update_engine[1849]: I20250916 04:41:51.874990 1849 update_attempter.cc:509] Updating boot flags... Sep 16 04:41:51.998538 kubelet[3280]: I0916 04:41:51.997959 3280 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7xf46" podStartSLOduration=2.997934822 podStartE2EDuration="2.997934822s" podCreationTimestamp="2025-09-16 04:41:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:41:50.893649108 +0000 UTC m=+6.137834636" watchObservedRunningTime="2025-09-16 04:41:51.997934822 +0000 UTC m=+7.242120302" Sep 16 04:41:52.379483 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2672509552.mount: Deactivated successfully. Sep 16 04:41:52.972149 containerd[1865]: time="2025-09-16T04:41:52.972103748Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:41:52.976237 containerd[1865]: time="2025-09-16T04:41:52.976207038Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=22152365" Sep 16 04:41:52.979288 containerd[1865]: time="2025-09-16T04:41:52.979262553Z" level=info msg="ImageCreate event name:\"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:41:52.983974 containerd[1865]: time="2025-09-16T04:41:52.983945477Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:41:52.984330 containerd[1865]: time="2025-09-16T04:41:52.984305424Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"22148360\" in 2.450558837s" Sep 16 04:41:52.984409 containerd[1865]: time="2025-09-16T04:41:52.984396698Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\"" Sep 16 04:41:52.986378 containerd[1865]: time="2025-09-16T04:41:52.986353957Z" level=info msg="CreateContainer within sandbox \"7b74cf9e64f0f6970ab71298684edc30f7c17f79563f89ce8c3b2f3c65433b56\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 16 04:41:53.011745 containerd[1865]: time="2025-09-16T04:41:53.011719976Z" level=info msg="Container 06a7ae6e710acc8aaf3f41218239c962f2631a9b4b5de415f56bd13b7f00d751: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:41:53.027495 containerd[1865]: time="2025-09-16T04:41:53.027462861Z" level=info msg="CreateContainer within sandbox \"7b74cf9e64f0f6970ab71298684edc30f7c17f79563f89ce8c3b2f3c65433b56\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"06a7ae6e710acc8aaf3f41218239c962f2631a9b4b5de415f56bd13b7f00d751\"" Sep 16 04:41:53.028123 containerd[1865]: time="2025-09-16T04:41:53.028070968Z" level=info msg="StartContainer for \"06a7ae6e710acc8aaf3f41218239c962f2631a9b4b5de415f56bd13b7f00d751\"" Sep 16 04:41:53.028874 containerd[1865]: time="2025-09-16T04:41:53.028847159Z" level=info msg="connecting to shim 
06a7ae6e710acc8aaf3f41218239c962f2631a9b4b5de415f56bd13b7f00d751" address="unix:///run/containerd/s/59682c55a65d7531599ed1de6e80a0c108b45b6be1a043a24b0ac9449957c1dd" protocol=ttrpc version=3 Sep 16 04:41:53.043727 systemd[1]: Started cri-containerd-06a7ae6e710acc8aaf3f41218239c962f2631a9b4b5de415f56bd13b7f00d751.scope - libcontainer container 06a7ae6e710acc8aaf3f41218239c962f2631a9b4b5de415f56bd13b7f00d751. Sep 16 04:41:53.068886 containerd[1865]: time="2025-09-16T04:41:53.068855911Z" level=info msg="StartContainer for \"06a7ae6e710acc8aaf3f41218239c962f2631a9b4b5de415f56bd13b7f00d751\" returns successfully" Sep 16 04:41:53.896222 kubelet[3280]: I0916 04:41:53.896173 3280 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-fq2r7" podStartSLOduration=1.443495746 podStartE2EDuration="3.896158687s" podCreationTimestamp="2025-09-16 04:41:50 +0000 UTC" firstStartedPulling="2025-09-16 04:41:50.532586735 +0000 UTC m=+5.776772207" lastFinishedPulling="2025-09-16 04:41:52.985249676 +0000 UTC m=+8.229435148" observedRunningTime="2025-09-16 04:41:53.896022226 +0000 UTC m=+9.140207706" watchObservedRunningTime="2025-09-16 04:41:53.896158687 +0000 UTC m=+9.140344167" Sep 16 04:41:58.157122 sudo[2331]: pam_unix(sudo:session): session closed for user root Sep 16 04:41:58.245901 sshd[2330]: Connection closed by 10.200.16.10 port 38444 Sep 16 04:41:58.247769 sshd-session[2327]: pam_unix(sshd:session): session closed for user core Sep 16 04:41:58.250975 systemd-logind[1847]: Session 9 logged out. Waiting for processes to exit. Sep 16 04:41:58.251505 systemd[1]: sshd@6-10.200.20.12:22-10.200.16.10:38444.service: Deactivated successfully. Sep 16 04:41:58.254073 systemd[1]: session-9.scope: Deactivated successfully. Sep 16 04:41:58.254279 systemd[1]: session-9.scope: Consumed 3.114s CPU time, 219M memory peak. Sep 16 04:41:58.256644 systemd-logind[1847]: Removed session 9. Sep 16 04:42:02.242371 systemd[1]: Created slice kubepods-besteffort-pod56b6d3c4_9cda_4a0d_a4a3_4921396fcb36.slice - libcontainer container kubepods-besteffort-pod56b6d3c4_9cda_4a0d_a4a3_4921396fcb36.slice. 
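
The `pod_startup_latency_tracker` entry above for tigera-operator shows how the two durations relate: `podStartSLOduration` is the end-to-end duration minus the time spent pulling images, which is why the static control-plane pods earlier (zero-value pull timestamps) report identical SLO and E2E values. Reproducing the arithmetic from the logged values (`mustParse` is a hypothetical helper; Go's parser accepts the optional fractional seconds):

```go
package main

import (
	"fmt"
	"time"
)

// mustParse handles the fixed "2006-01-02 15:04:05 -0700 MST" layout used
// by the kubelet's timestamp dumps; fractional seconds in the input are
// accepted even though the layout omits them.
func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Values copied from the tigera-operator-755d956888-fq2r7 entry above.
	created := mustParse("2025-09-16 04:41:50 +0000 UTC")
	firstPull := mustParse("2025-09-16 04:41:50.532586735 +0000 UTC")
	lastPull := mustParse("2025-09-16 04:41:52.985249676 +0000 UTC")
	observed := mustParse("2025-09-16 04:41:53.896158687 +0000 UTC") // watchObservedRunningTime

	e2e := observed.Sub(created)         // 3.896158687s
	slo := e2e - lastPull.Sub(firstPull) // 1.443495746s
	fmt.Println("E2E:", e2e, "SLO:", slo)
}
```
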
Sep 16 04:42:02.316445 kubelet[3280]: I0916 04:42:02.316364 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56b6d3c4-9cda-4a0d-a4a3-4921396fcb36-tigera-ca-bundle\") pod \"calico-typha-6d84db68cd-57jdn\" (UID: \"56b6d3c4-9cda-4a0d-a4a3-4921396fcb36\") " pod="calico-system/calico-typha-6d84db68cd-57jdn" Sep 16 04:42:02.316445 kubelet[3280]: I0916 04:42:02.316440 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/56b6d3c4-9cda-4a0d-a4a3-4921396fcb36-typha-certs\") pod \"calico-typha-6d84db68cd-57jdn\" (UID: \"56b6d3c4-9cda-4a0d-a4a3-4921396fcb36\") " pod="calico-system/calico-typha-6d84db68cd-57jdn" Sep 16 04:42:02.316820 kubelet[3280]: I0916 04:42:02.316493 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkbz4\" (UniqueName: \"kubernetes.io/projected/56b6d3c4-9cda-4a0d-a4a3-4921396fcb36-kube-api-access-hkbz4\") pod \"calico-typha-6d84db68cd-57jdn\" (UID: \"56b6d3c4-9cda-4a0d-a4a3-4921396fcb36\") " pod="calico-system/calico-typha-6d84db68cd-57jdn" Sep 16 04:42:02.365271 systemd[1]: Created slice kubepods-besteffort-podd5f8f8d9_7d42_47a0_b69b_4870fc87b9b9.slice - libcontainer container kubepods-besteffort-podd5f8f8d9_7d42_47a0_b69b_4870fc87b9b9.slice. Sep 16 04:42:02.416997 kubelet[3280]: I0916 04:42:02.416956 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/d5f8f8d9-7d42-47a0-b69b-4870fc87b9b9-cni-log-dir\") pod \"calico-node-z4th7\" (UID: \"d5f8f8d9-7d42-47a0-b69b-4870fc87b9b9\") " pod="calico-system/calico-node-z4th7" Sep 16 04:42:02.417112 kubelet[3280]: I0916 04:42:02.417090 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/d5f8f8d9-7d42-47a0-b69b-4870fc87b9b9-cni-net-dir\") pod \"calico-node-z4th7\" (UID: \"d5f8f8d9-7d42-47a0-b69b-4870fc87b9b9\") " pod="calico-system/calico-node-z4th7" Sep 16 04:42:02.417112 kubelet[3280]: I0916 04:42:02.417104 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/d5f8f8d9-7d42-47a0-b69b-4870fc87b9b9-policysync\") pod \"calico-node-z4th7\" (UID: \"d5f8f8d9-7d42-47a0-b69b-4870fc87b9b9\") " pod="calico-system/calico-node-z4th7" Sep 16 04:42:02.417159 kubelet[3280]: I0916 04:42:02.417119 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d5f8f8d9-7d42-47a0-b69b-4870fc87b9b9-xtables-lock\") pod \"calico-node-z4th7\" (UID: \"d5f8f8d9-7d42-47a0-b69b-4870fc87b9b9\") " pod="calico-system/calico-node-z4th7" Sep 16 04:42:02.417159 kubelet[3280]: I0916 04:42:02.417133 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d5f8f8d9-7d42-47a0-b69b-4870fc87b9b9-tigera-ca-bundle\") pod \"calico-node-z4th7\" (UID: \"d5f8f8d9-7d42-47a0-b69b-4870fc87b9b9\") " pod="calico-system/calico-node-z4th7" Sep 16 04:42:02.417315 kubelet[3280]: I0916 04:42:02.417268 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: 
\"kubernetes.io/secret/d5f8f8d9-7d42-47a0-b69b-4870fc87b9b9-node-certs\") pod \"calico-node-z4th7\" (UID: \"d5f8f8d9-7d42-47a0-b69b-4870fc87b9b9\") " pod="calico-system/calico-node-z4th7" Sep 16 04:42:02.417315 kubelet[3280]: I0916 04:42:02.417287 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/d5f8f8d9-7d42-47a0-b69b-4870fc87b9b9-var-run-calico\") pod \"calico-node-z4th7\" (UID: \"d5f8f8d9-7d42-47a0-b69b-4870fc87b9b9\") " pod="calico-system/calico-node-z4th7" Sep 16 04:42:02.417505 kubelet[3280]: I0916 04:42:02.417477 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/d5f8f8d9-7d42-47a0-b69b-4870fc87b9b9-flexvol-driver-host\") pod \"calico-node-z4th7\" (UID: \"d5f8f8d9-7d42-47a0-b69b-4870fc87b9b9\") " pod="calico-system/calico-node-z4th7" Sep 16 04:42:02.417505 kubelet[3280]: I0916 04:42:02.417502 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d5f8f8d9-7d42-47a0-b69b-4870fc87b9b9-var-lib-calico\") pod \"calico-node-z4th7\" (UID: \"d5f8f8d9-7d42-47a0-b69b-4870fc87b9b9\") " pod="calico-system/calico-node-z4th7" Sep 16 04:42:02.417569 kubelet[3280]: I0916 04:42:02.417515 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqhws\" (UniqueName: \"kubernetes.io/projected/d5f8f8d9-7d42-47a0-b69b-4870fc87b9b9-kube-api-access-fqhws\") pod \"calico-node-z4th7\" (UID: \"d5f8f8d9-7d42-47a0-b69b-4870fc87b9b9\") " pod="calico-system/calico-node-z4th7" Sep 16 04:42:02.417569 kubelet[3280]: I0916 04:42:02.417525 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/d5f8f8d9-7d42-47a0-b69b-4870fc87b9b9-cni-bin-dir\") pod \"calico-node-z4th7\" (UID: \"d5f8f8d9-7d42-47a0-b69b-4870fc87b9b9\") " pod="calico-system/calico-node-z4th7" Sep 16 04:42:02.417668 kubelet[3280]: I0916 04:42:02.417653 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d5f8f8d9-7d42-47a0-b69b-4870fc87b9b9-lib-modules\") pod \"calico-node-z4th7\" (UID: \"d5f8f8d9-7d42-47a0-b69b-4870fc87b9b9\") " pod="calico-system/calico-node-z4th7" Sep 16 04:42:02.505473 kubelet[3280]: E0916 04:42:02.504636 3280 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kp8nh" podUID="10cac60e-b646-4b9b-9967-e70f95c1e33f" Sep 16 04:42:02.525121 kubelet[3280]: E0916 04:42:02.525061 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:42:02.525121 kubelet[3280]: W0916 04:42:02.525078 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:42:02.525121 kubelet[3280]: E0916 04:42:02.525095 3280 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:42:02.542450 kubelet[3280]: E0916 04:42:02.542428 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:42:02.542450 kubelet[3280]: W0916 04:42:02.542445 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:42:02.542574 kubelet[3280]: E0916 04:42:02.542462 3280 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:42:02.549475 containerd[1865]: time="2025-09-16T04:42:02.549227807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6d84db68cd-57jdn,Uid:56b6d3c4-9cda-4a0d-a4a3-4921396fcb36,Namespace:calico-system,Attempt:0,}" Sep 16 04:42:02.595154 containerd[1865]: time="2025-09-16T04:42:02.595081234Z" level=info msg="connecting to shim 62b41cf9688fb704716e74701a1b4871c85b8eb79ba7c1d842694f77305daf2b" address="unix:///run/containerd/s/090a33ce8f8a33521527811e1be5468cd4a0aafdc7436f2cd0ed6459459a8215" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:42:02.602660 kubelet[3280]: E0916 04:42:02.601773 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:42:02.602660 kubelet[3280]: W0916 04:42:02.602660 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:42:02.602859 kubelet[3280]: E0916 04:42:02.602684 3280 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:42:02.603098 kubelet[3280]: E0916 04:42:02.603072 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:42:02.603144 kubelet[3280]: W0916 04:42:02.603086 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:42:02.603144 kubelet[3280]: E0916 04:42:02.603122 3280 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:42:02.603356 kubelet[3280]: E0916 04:42:02.603338 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:42:02.603356 kubelet[3280]: W0916 04:42:02.603351 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:42:02.603751 kubelet[3280]: E0916 04:42:02.603363 3280 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:42:02.603861 kubelet[3280]: E0916 04:42:02.603839 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:42:02.603861 kubelet[3280]: W0916 04:42:02.603853 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:42:02.605748 kubelet[3280]: E0916 04:42:02.603864 3280 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:42:02.612785 kubelet[3280]: E0916 04:42:02.611262 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:42:02.612785 kubelet[3280]: W0916 04:42:02.611267 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:42:02.612785 kubelet[3280]: E0916 04:42:02.611274 3280 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Sep 16 04:42:02.612785 kubelet[3280]: E0916 04:42:02.611510 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:42:02.612983 kubelet[3280]: W0916 04:42:02.611517 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:42:02.612983 kubelet[3280]: E0916 04:42:02.611524 3280 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:42:02.612983 kubelet[3280]: E0916 04:42:02.612749 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:42:02.612983 kubelet[3280]: W0916 04:42:02.612761 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:42:02.612983 kubelet[3280]: E0916 04:42:02.612771 3280 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:42:02.619211 kubelet[3280]: E0916 04:42:02.619197 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:42:02.619445 kubelet[3280]: W0916 04:42:02.619288 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:42:02.619445 kubelet[3280]: E0916 04:42:02.619304 3280 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:42:02.619445 kubelet[3280]: I0916 04:42:02.619328 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/10cac60e-b646-4b9b-9967-e70f95c1e33f-varrun\") pod \"csi-node-driver-kp8nh\" (UID: \"10cac60e-b646-4b9b-9967-e70f95c1e33f\") " pod="calico-system/csi-node-driver-kp8nh" Sep 16 04:42:02.619679 kubelet[3280]: E0916 04:42:02.619557 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:42:02.619679 kubelet[3280]: W0916 04:42:02.619569 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:42:02.619679 kubelet[3280]: E0916 04:42:02.619579 3280 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:42:02.619679 kubelet[3280]: I0916 04:42:02.619594 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44vt2\" (UniqueName: \"kubernetes.io/projected/10cac60e-b646-4b9b-9967-e70f95c1e33f-kube-api-access-44vt2\") pod \"csi-node-driver-kp8nh\" (UID: \"10cac60e-b646-4b9b-9967-e70f95c1e33f\") " pod="calico-system/csi-node-driver-kp8nh" Sep 16 04:42:02.621839 kubelet[3280]: E0916 04:42:02.621725 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:42:02.621839 kubelet[3280]: W0916 04:42:02.621740 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:42:02.621839 kubelet[3280]: E0916 04:42:02.621754 3280 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:42:02.621839 kubelet[3280]: I0916 04:42:02.621768 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/10cac60e-b646-4b9b-9967-e70f95c1e33f-socket-dir\") pod \"csi-node-driver-kp8nh\" (UID: \"10cac60e-b646-4b9b-9967-e70f95c1e33f\") " pod="calico-system/csi-node-driver-kp8nh" Sep 16 04:42:02.622199 kubelet[3280]: E0916 04:42:02.622184 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:42:02.622337 kubelet[3280]: W0916 04:42:02.622261 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:42:02.622337 kubelet[3280]: E0916 04:42:02.622291 3280 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:42:02.622337 kubelet[3280]: I0916 04:42:02.622321 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/10cac60e-b646-4b9b-9967-e70f95c1e33f-registration-dir\") pod \"csi-node-driver-kp8nh\" (UID: \"10cac60e-b646-4b9b-9967-e70f95c1e33f\") " pod="calico-system/csi-node-driver-kp8nh" Sep 16 04:42:02.622677 kubelet[3280]: E0916 04:42:02.622663 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:42:02.622827 kubelet[3280]: W0916 04:42:02.622749 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:42:02.622971 kubelet[3280]: E0916 04:42:02.622955 3280 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:42:02.623873 kubelet[3280]: E0916 04:42:02.623762 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:42:02.623873 kubelet[3280]: W0916 04:42:02.623788 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:42:02.623873 kubelet[3280]: E0916 04:42:02.623817 3280 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:42:02.624335 kubelet[3280]: E0916 04:42:02.624179 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:42:02.624335 kubelet[3280]: W0916 04:42:02.624217 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:42:02.624335 kubelet[3280]: E0916 04:42:02.624243 3280 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:42:02.624690 kubelet[3280]: E0916 04:42:02.624567 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:42:02.624690 kubelet[3280]: W0916 04:42:02.624579 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:42:02.624690 kubelet[3280]: E0916 04:42:02.624629 3280 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:42:02.624690 kubelet[3280]: I0916 04:42:02.624659 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/10cac60e-b646-4b9b-9967-e70f95c1e33f-kubelet-dir\") pod \"csi-node-driver-kp8nh\" (UID: \"10cac60e-b646-4b9b-9967-e70f95c1e33f\") " pod="calico-system/csi-node-driver-kp8nh" Sep 16 04:42:02.625071 kubelet[3280]: E0916 04:42:02.624962 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:42:02.625071 kubelet[3280]: W0916 04:42:02.624976 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:42:02.625071 kubelet[3280]: E0916 04:42:02.624999 3280 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 16 04:42:02.625261 kubelet[3280]: E0916 04:42:02.625251 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:42:02.625397 kubelet[3280]: W0916 04:42:02.625315 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:42:02.625397 kubelet[3280]: E0916 04:42:02.625329 3280 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:42:02.626458 kubelet[3280]: E0916 04:42:02.626443 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:42:02.626570 kubelet[3280]: W0916 04:42:02.626543 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:42:02.626697 kubelet[3280]: E0916 04:42:02.626683 3280 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:42:02.627075 kubelet[3280]: E0916 04:42:02.627052 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:42:02.627075 kubelet[3280]: W0916 04:42:02.627067 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:42:02.627075 kubelet[3280]: E0916 04:42:02.627077 3280 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:42:02.628721 kubelet[3280]: E0916 04:42:02.628701 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:42:02.628721 kubelet[3280]: W0916 04:42:02.628716 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:42:02.628818 kubelet[3280]: E0916 04:42:02.628727 3280 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 16 04:42:02.629872 kubelet[3280]: E0916 04:42:02.629856 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 16 04:42:02.629872 kubelet[3280]: W0916 04:42:02.629868 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 16 04:42:02.629872 kubelet[3280]: E0916 04:42:02.629878 3280 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Sep 16 04:42:02.639969 systemd[1]: Started cri-containerd-62b41cf9688fb704716e74701a1b4871c85b8eb79ba7c1d842694f77305daf2b.scope - libcontainer container 62b41cf9688fb704716e74701a1b4871c85b8eb79ba7c1d842694f77305daf2b.
Sep 16 04:42:02.670064 containerd[1865]: time="2025-09-16T04:42:02.669942622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-z4th7,Uid:d5f8f8d9-7d42-47a0-b69b-4870fc87b9b9,Namespace:calico-system,Attempt:0,}"
Sep 16 04:42:02.711496 containerd[1865]: time="2025-09-16T04:42:02.711449228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6d84db68cd-57jdn,Uid:56b6d3c4-9cda-4a0d-a4a3-4921396fcb36,Namespace:calico-system,Attempt:0,} returns sandbox id \"62b41cf9688fb704716e74701a1b4871c85b8eb79ba7c1d842694f77305daf2b\""
Sep 16 04:42:02.713667 containerd[1865]: time="2025-09-16T04:42:02.713601646Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\""
Sep 16 04:42:02.730382 containerd[1865]: time="2025-09-16T04:42:02.730201910Z" level=info msg="connecting to shim f57868e137c91dd5aa3a5b9c2ca2a5c7819865ea624bd71492c705208e5f4720" address="unix:///run/containerd/s/e38fd07102204b98c9c7f9cf6b8fb4e402fd21d63cfd96e27da6d09e04855b5d" namespace=k8s.io protocol=ttrpc version=3
[... the FlexVolume probe triplet resumes at Sep 16 04:42:02.730445 and repeats through Sep 16 04:42:02.757917 ...]
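
Each kubelet record here wraps a klog header inside the journald line: a severity letter fused with the month and day (E0916, W0916, I0916), the wall-clock time, the emitting PID, and the source file:line. A small sketch for pulling those fields apart when post-processing a journal dump like this one (the regex is an assumption fitted to the lines above, not an official klog parser):

    import re

    # klog header as it appears inside the journald message body, e.g.
    # "E0916 04:42:02.623762 3280 driver-call.go:262] Failed to unmarshal ..."
    KLOG = re.compile(
        r"(?P<sev>[IWEF])(?P<mmdd>\d{4}) "
        r"(?P<time>\d{2}:\d{2}:\d{2}\.\d{6}) +"
        r"(?P<pid>\d+) (?P<src>[\w./-]+:\d+)\] (?P<msg>.*)"
    )

    def parse(line: str):
        m = KLOG.search(line)
        return m.groupdict() if m else None

    rec = parse('kubelet[3280]: E0916 04:42:02.623762 3280 '
                'driver-call.go:262] Failed to unmarshal output for command: init')
    assert rec and rec["sev"] == "E" and rec["src"] == "driver-call.go:262"
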
Sep 16 04:42:02.767861 systemd[1]: Started cri-containerd-f57868e137c91dd5aa3a5b9c2ca2a5c7819865ea624bd71492c705208e5f4720.scope - libcontainer container f57868e137c91dd5aa3a5b9c2ca2a5c7819865ea624bd71492c705208e5f4720.
Sep 16 04:42:02.802091 containerd[1865]: time="2025-09-16T04:42:02.802049031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-z4th7,Uid:d5f8f8d9-7d42-47a0-b69b-4870fc87b9b9,Namespace:calico-system,Attempt:0,} returns sandbox id \"f57868e137c91dd5aa3a5b9c2ca2a5c7819865ea624bd71492c705208e5f4720\""
Sep 16 04:42:03.823481 kubelet[3280]: E0916 04:42:03.823416 3280 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kp8nh" podUID="10cac60e-b646-4b9b-9967-e70f95c1e33f"
Sep 16 04:42:03.923442 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount790193273.mount: Deactivated successfully.
Sep 16 04:42:04.319650 containerd[1865]: time="2025-09-16T04:42:04.319164564Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:42:04.322820 containerd[1865]: time="2025-09-16T04:42:04.322706072Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=33105775"
Sep 16 04:42:04.327024 containerd[1865]: time="2025-09-16T04:42:04.326994770Z" level=info msg="ImageCreate event name:\"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:42:04.331702 containerd[1865]: time="2025-09-16T04:42:04.331673744Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:42:04.332117 containerd[1865]: time="2025-09-16T04:42:04.332093581Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"33105629\" in 1.61844099s"
Sep 16 04:42:04.332187 containerd[1865]: time="2025-09-16T04:42:04.332175464Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\""
Sep 16 04:42:04.333125 containerd[1865]: time="2025-09-16T04:42:04.333099164Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\""
Sep 16 04:42:04.342273 containerd[1865]: time="2025-09-16T04:42:04.342245274Z" level=info msg="CreateContainer within sandbox \"62b41cf9688fb704716e74701a1b4871c85b8eb79ba7c1d842694f77305daf2b\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Sep 16 04:42:04.360638 containerd[1865]: time="2025-09-16T04:42:04.360592224Z" level=info msg="Container 0f8a73d647543fac4070c751cd10dd34a5677585b802a49ab369c0dc3ae46cdb: CDI devices from CRI Config.CDIDevices: []"
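
The two records above give both the byte count and the wall-clock pull time for the typha image, which is enough to estimate effective pull bandwidth; a quick back-of-the-envelope check using only values copied from the log:

    # Values copied from the "stop pulling" / "Pulled image" records above.
    bytes_read = 33_105_775          # ghcr.io/flatcar/calico/typha:v3.30.3
    duration_s = 1.61844099          # "... in 1.61844099s"

    rate_mib_s = bytes_read / duration_s / (1024 ** 2)
    print(f"effective pull rate: {rate_mib_s:.1f} MiB/s")  # ~19.5 MiB/s
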
Sep 16 04:42:04.381167 containerd[1865]: time="2025-09-16T04:42:04.381141657Z" level=info msg="CreateContainer within sandbox \"62b41cf9688fb704716e74701a1b4871c85b8eb79ba7c1d842694f77305daf2b\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"0f8a73d647543fac4070c751cd10dd34a5677585b802a49ab369c0dc3ae46cdb\""
Sep 16 04:42:04.382446 containerd[1865]: time="2025-09-16T04:42:04.382422328Z" level=info msg="StartContainer for \"0f8a73d647543fac4070c751cd10dd34a5677585b802a49ab369c0dc3ae46cdb\""
Sep 16 04:42:04.383222 containerd[1865]: time="2025-09-16T04:42:04.383194743Z" level=info msg="connecting to shim 0f8a73d647543fac4070c751cd10dd34a5677585b802a49ab369c0dc3ae46cdb" address="unix:///run/containerd/s/090a33ce8f8a33521527811e1be5468cd4a0aafdc7436f2cd0ed6459459a8215" protocol=ttrpc version=3
Sep 16 04:42:04.399744 systemd[1]: Started cri-containerd-0f8a73d647543fac4070c751cd10dd34a5677585b802a49ab369c0dc3ae46cdb.scope - libcontainer container 0f8a73d647543fac4070c751cd10dd34a5677585b802a49ab369c0dc3ae46cdb.
Sep 16 04:42:04.444451 containerd[1865]: time="2025-09-16T04:42:04.444417821Z" level=info msg="StartContainer for \"0f8a73d647543fac4070c751cd10dd34a5677585b802a49ab369c0dc3ae46cdb\" returns successfully"
Sep 16 04:42:04.925025 kubelet[3280]: E0916 04:42:04.924950 3280 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 16 04:42:04.925025 kubelet[3280]: W0916 04:42:04.924969 3280 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 16 04:42:04.925340 kubelet[3280]: E0916 04:42:04.924986 3280 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[... the FlexVolume probe triplet repeats through Sep 16 04:42:04.947924 ...]
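
The "connecting to shim" records show containerd reaching each container's shim over a unix socket under /run/containerd/s/ speaking ttrpc. A small diagnostic sketch (the path is copied from the record above and will differ per task; checking connectivity this way is an assumption-level debugging aid, not part of containerd):

    import os
    import socket
    import stat

    # Socket path as logged in the "connecting to shim" record above;
    # on another host or task the hash component will differ.
    SHIM = "/run/containerd/s/090a33ce8f8a33521527811e1be5468cd4a0aafdc7436f2cd0ed6459459a8215"

    def shim_socket_ok(path: str) -> bool:
        try:
            if not stat.S_ISSOCK(os.stat(path).st_mode):
                return False
        except FileNotFoundError:
            return False
        # Connecting (without speaking ttrpc) confirms something is accepting.
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            try:
                s.connect(path)
                return True
            except OSError:
                return False

    print(shim_socket_ok(SHIM))
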
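
Bursts like the one condensed above differ only in their timestamps, so they collapse well in post-processing; a sketch that folds consecutive records that are identical once times are masked:

    import re
    from itertools import groupby

    # Matches the microsecond wall-clock times in both the journald prefix
    # and the embedded klog header, e.g. "04:42:04.924950".
    TS = re.compile(r"\b\d{2}:\d{2}:\d{2}\.\d{6}\b")

    def collapse(lines):
        # Group consecutive lines whose text is identical once timestamps
        # are blanked out, and keep one copy plus a repeat count.
        for _, run in groupby(lines, key=lambda l: TS.sub("<ts>", l)):
            run = list(run)
            yield run[0] if len(run) == 1 else f"{run[0]}  [repeated {len(run)} times]"

    sample = [
        "E0916 04:42:04.924950 3280 driver-call.go:262] Failed to unmarshal output",
        "E0916 04:42:04.925621 3280 driver-call.go:262] Failed to unmarshal output",
    ]
    print("\n".join(collapse(sample)))
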
Sep 16 04:42:05.612953 containerd[1865]: time="2025-09-16T04:42:05.612899960Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:42:05.615872 containerd[1865]: time="2025-09-16T04:42:05.615844961Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4266814"
Sep 16 04:42:05.619650 containerd[1865]: time="2025-09-16T04:42:05.619603980Z" level=info msg="ImageCreate event name:\"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:42:05.623678 containerd[1865]: time="2025-09-16T04:42:05.623640094Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:42:05.624164 containerd[1865]: time="2025-09-16T04:42:05.623896774Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5636015\" in 1.290574604s"
Sep 16 04:42:05.624164 containerd[1865]: time="2025-09-16T04:42:05.623925487Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\""
Sep 16 04:42:05.626641 containerd[1865]: time="2025-09-16T04:42:05.626586216Z" level=info msg="CreateContainer within sandbox \"f57868e137c91dd5aa3a5b9c2ca2a5c7819865ea624bd71492c705208e5f4720\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Sep 16 04:42:05.649036 containerd[1865]: time="2025-09-16T04:42:05.649008426Z" level=info msg="Container 21e90bac2cd6807e9f4c491094fec9132464e8c2276a90307a0f0b21ad6eb15c: CDI devices from CRI Config.CDIDevices: []"
Sep 16 04:42:05.667292 containerd[1865]: time="2025-09-16T04:42:05.667260813Z" level=info msg="CreateContainer within sandbox \"f57868e137c91dd5aa3a5b9c2ca2a5c7819865ea624bd71492c705208e5f4720\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"21e90bac2cd6807e9f4c491094fec9132464e8c2276a90307a0f0b21ad6eb15c\""
Sep 16 04:42:05.668227 containerd[1865]: time="2025-09-16T04:42:05.668205465Z" level=info msg="StartContainer for \"21e90bac2cd6807e9f4c491094fec9132464e8c2276a90307a0f0b21ad6eb15c\""
Sep 16 04:42:05.669480 containerd[1865]: time="2025-09-16T04:42:05.669455991Z" level=info msg="connecting to shim 21e90bac2cd6807e9f4c491094fec9132464e8c2276a90307a0f0b21ad6eb15c" address="unix:///run/containerd/s/e38fd07102204b98c9c7f9cf6b8fb4e402fd21d63cfd96e27da6d09e04855b5d" protocol=ttrpc version=3
Sep 16 04:42:05.687732 systemd[1]: Started cri-containerd-21e90bac2cd6807e9f4c491094fec9132464e8c2276a90307a0f0b21ad6eb15c.scope - libcontainer container 21e90bac2cd6807e9f4c491094fec9132464e8c2276a90307a0f0b21ad6eb15c.
Sep 16 04:42:05.725437 containerd[1865]: time="2025-09-16T04:42:05.725402029Z" level=info msg="StartContainer for \"21e90bac2cd6807e9f4c491094fec9132464e8c2276a90307a0f0b21ad6eb15c\" returns successfully"
Sep 16 04:42:05.730940 systemd[1]: cri-containerd-21e90bac2cd6807e9f4c491094fec9132464e8c2276a90307a0f0b21ad6eb15c.scope: Deactivated successfully.
Sep 16 04:42:05.735493 containerd[1865]: time="2025-09-16T04:42:05.735395861Z" level=info msg="received exit event container_id:\"21e90bac2cd6807e9f4c491094fec9132464e8c2276a90307a0f0b21ad6eb15c\" id:\"21e90bac2cd6807e9f4c491094fec9132464e8c2276a90307a0f0b21ad6eb15c\" pid:3998 exited_at:{seconds:1757997725 nanos:734541963}"
Sep 16 04:42:05.735493 containerd[1865]: time="2025-09-16T04:42:05.735434870Z" level=info msg="TaskExit event in podsandbox handler container_id:\"21e90bac2cd6807e9f4c491094fec9132464e8c2276a90307a0f0b21ad6eb15c\" id:\"21e90bac2cd6807e9f4c491094fec9132464e8c2276a90307a0f0b21ad6eb15c\" pid:3998 exited_at:{seconds:1757997725 nanos:734541963}"
Sep 16 04:42:05.748875 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-21e90bac2cd6807e9f4c491094fec9132464e8c2276a90307a0f0b21ad6eb15c-rootfs.mount: Deactivated successfully.
Sep 16 04:42:05.824083 kubelet[3280]: E0916 04:42:05.823763 3280 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kp8nh" podUID="10cac60e-b646-4b9b-9967-e70f95c1e33f"
Sep 16 04:42:05.918875 kubelet[3280]: I0916 04:42:05.918471 3280 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 16 04:42:05.936500 kubelet[3280]: I0916 04:42:05.936156 3280 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6d84db68cd-57jdn" podStartSLOduration=2.316438385 podStartE2EDuration="3.936141549s" podCreationTimestamp="2025-09-16 04:42:02 +0000 UTC" firstStartedPulling="2025-09-16 04:42:02.71314656 +0000 UTC m=+17.957332032" lastFinishedPulling="2025-09-16 04:42:04.332849724 +0000 UTC m=+19.577035196" observedRunningTime="2025-09-16 04:42:04.923839303 +0000 UTC m=+20.168024783" watchObservedRunningTime="2025-09-16 04:42:05.936141549 +0000 UTC m=+21.180327021"
Sep 16 04:42:07.823432 kubelet[3280]: E0916 04:42:07.823360 3280 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kp8nh" podUID="10cac60e-b646-4b9b-9967-e70f95c1e33f"
Sep 16 04:42:07.925364 containerd[1865]: time="2025-09-16T04:42:07.924944672Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\""
Sep 16 04:42:09.823090 kubelet[3280]: E0916 04:42:09.823045 3280 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kp8nh" podUID="10cac60e-b646-4b9b-9967-e70f95c1e33f"
Sep 16 04:42:10.046992 containerd[1865]: time="2025-09-16T04:42:10.046942954Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
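
The latency-tracker record above is self-consistent: podStartE2EDuration matches watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration matches that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). The exit events likewise carry raw epoch seconds. Both can be rechecked with values copied from the log:

    from datetime import datetime, timezone

    def t(s: str) -> datetime:
        # Truncate to microseconds so strptime's %f accepts the value.
        return datetime.strptime(s[:26], "%Y-%m-%d %H:%M:%S.%f").replace(tzinfo=timezone.utc)

    created   = datetime(2025, 9, 16, 4, 42, 2, tzinfo=timezone.utc)  # podCreationTimestamp
    running   = t("2025-09-16 04:42:05.936141549")                    # watchObservedRunningTime
    pull_from = t("2025-09-16 04:42:02.71314656")                     # firstStartedPulling
    pull_to   = t("2025-09-16 04:42:04.332849724")                    # lastFinishedPulling

    e2e = (running - created).total_seconds()
    slo = e2e - (pull_to - pull_from).total_seconds()
    # Log reports 3.936141549s and 2.316438385; microsecond truncation aside:
    print(f"podStartE2EDuration ~ {e2e:.6f}s, podStartSLOduration ~ {slo:.6f}s")

    # exited_at in the TaskExit record above is plain epoch seconds:
    print(datetime.fromtimestamp(1757997725, tz=timezone.utc))  # 2025-09-16 04:42:05+00:00
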
Sep 16 04:42:10.050185 containerd[1865]: time="2025-09-16T04:42:10.050150867Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=65913477"
Sep 16 04:42:10.054676 containerd[1865]: time="2025-09-16T04:42:10.054631428Z" level=info msg="ImageCreate event name:\"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:42:10.059730 containerd[1865]: time="2025-09-16T04:42:10.059694651Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:42:10.060448 containerd[1865]: time="2025-09-16T04:42:10.060421582Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"67282718\" in 2.134771249s"
Sep 16 04:42:10.060448 containerd[1865]: time="2025-09-16T04:42:10.060448103Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\""
Sep 16 04:42:10.063599 containerd[1865]: time="2025-09-16T04:42:10.063547644Z" level=info msg="CreateContainer within sandbox \"f57868e137c91dd5aa3a5b9c2ca2a5c7819865ea624bd71492c705208e5f4720\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Sep 16 04:42:10.087362 containerd[1865]: time="2025-09-16T04:42:10.087278075Z" level=info msg="Container 19abad76c339acff8fe762bf187b95911831a6c5e31182e579df52b390789801: CDI devices from CRI Config.CDIDevices: []"
Sep 16 04:42:10.117885 containerd[1865]: time="2025-09-16T04:42:10.117847883Z" level=info msg="CreateContainer within sandbox \"f57868e137c91dd5aa3a5b9c2ca2a5c7819865ea624bd71492c705208e5f4720\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"19abad76c339acff8fe762bf187b95911831a6c5e31182e579df52b390789801\""
Sep 16 04:42:10.118455 containerd[1865]: time="2025-09-16T04:42:10.118405896Z" level=info msg="StartContainer for \"19abad76c339acff8fe762bf187b95911831a6c5e31182e579df52b390789801\""
Sep 16 04:42:10.119466 containerd[1865]: time="2025-09-16T04:42:10.119434071Z" level=info msg="connecting to shim 19abad76c339acff8fe762bf187b95911831a6c5e31182e579df52b390789801" address="unix:///run/containerd/s/e38fd07102204b98c9c7f9cf6b8fb4e402fd21d63cfd96e27da6d09e04855b5d" protocol=ttrpc version=3
Sep 16 04:42:10.142720 systemd[1]: Started cri-containerd-19abad76c339acff8fe762bf187b95911831a6c5e31182e579df52b390789801.scope - libcontainer container 19abad76c339acff8fe762bf187b95911831a6c5e31182e579df52b390789801.
Sep 16 04:42:10.175197 containerd[1865]: time="2025-09-16T04:42:10.175161780Z" level=info msg="StartContainer for \"19abad76c339acff8fe762bf187b95911831a6c5e31182e579df52b390789801\" returns successfully"
Sep 16 04:42:11.688927 containerd[1865]: time="2025-09-16T04:42:11.688870421Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 16 04:42:11.691177 systemd[1]: cri-containerd-19abad76c339acff8fe762bf187b95911831a6c5e31182e579df52b390789801.scope: Deactivated successfully.
Sep 16 04:42:11.691479 systemd[1]: cri-containerd-19abad76c339acff8fe762bf187b95911831a6c5e31182e579df52b390789801.scope: Consumed 321ms CPU time, 193.8M memory peak, 165.8M written to disk.
Sep 16 04:42:11.695130 containerd[1865]: time="2025-09-16T04:42:11.695097208Z" level=info msg="received exit event container_id:\"19abad76c339acff8fe762bf187b95911831a6c5e31182e579df52b390789801\" id:\"19abad76c339acff8fe762bf187b95911831a6c5e31182e579df52b390789801\" pid:4058 exited_at:{seconds:1757997731 nanos:694594389}"
Sep 16 04:42:11.695488 containerd[1865]: time="2025-09-16T04:42:11.695466582Z" level=info msg="TaskExit event in podsandbox handler container_id:\"19abad76c339acff8fe762bf187b95911831a6c5e31182e579df52b390789801\" id:\"19abad76c339acff8fe762bf187b95911831a6c5e31182e579df52b390789801\" pid:4058 exited_at:{seconds:1757997731 nanos:694594389}"
Sep 16 04:42:11.713071 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-19abad76c339acff8fe762bf187b95911831a6c5e31182e579df52b390789801-rootfs.mount: Deactivated successfully.
Sep 16 04:42:11.776629 kubelet[3280]: I0916 04:42:11.776341 3280 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
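
The reload error above fires because the fs change event is for /etc/cni/net.d/calico-kubeconfig, which is not a network config: the CNI config loader only considers *.conf, *.conflist and *.json files, and none exist in the directory yet at that moment. A sketch of the same presence check (directory taken from the log; treat the exact pattern list as an assumption about the standard loader):

    import glob
    import os

    NET_D = "/etc/cni/net.d"

    def cni_config_present(dirpath: str = NET_D) -> bool:
        # Files like calico-kubeconfig are ignored by the loader, which is
        # why a WRITE event on it can arrive before any loadable config exists.
        patterns = ("*.conf", "*.conflist", "*.json")
        return any(glob.glob(os.path.join(dirpath, p)) for p in patterns)

    print(cni_config_present())
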
pod="calico-apiserver/calico-apiserver-7b46d565d6-2bmc2" Sep 16 04:42:11.940835 kubelet[3280]: I0916 04:42:11.891684 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2vjj\" (UniqueName: \"kubernetes.io/projected/9411887e-95b8-4feb-8af5-65296f88374a-kube-api-access-l2vjj\") pod \"coredns-668d6bf9bc-d54s2\" (UID: \"9411887e-95b8-4feb-8af5-65296f88374a\") " pod="kube-system/coredns-668d6bf9bc-d54s2" Sep 16 04:42:11.819916 systemd[1]: Created slice kubepods-burstable-pod9411887e_95b8_4feb_8af5_65296f88374a.slice - libcontainer container kubepods-burstable-pod9411887e_95b8_4feb_8af5_65296f88374a.slice. Sep 16 04:42:11.941076 kubelet[3280]: I0916 04:42:11.891792 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06dcaf54-5135-46f9-a579-681a9ddff71a-config\") pod \"goldmane-54d579b49d-xmq74\" (UID: \"06dcaf54-5135-46f9-a579-681a9ddff71a\") " pod="calico-system/goldmane-54d579b49d-xmq74" Sep 16 04:42:11.941076 kubelet[3280]: I0916 04:42:11.891817 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e8ebb8cf-d133-46d0-bb77-906589d0067d-config-volume\") pod \"coredns-668d6bf9bc-bv4px\" (UID: \"e8ebb8cf-d133-46d0-bb77-906589d0067d\") " pod="kube-system/coredns-668d6bf9bc-bv4px" Sep 16 04:42:11.941076 kubelet[3280]: I0916 04:42:11.891831 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d8840262-d485-42fc-987f-ea3bb66db3b0-whisker-ca-bundle\") pod \"whisker-54b778b8f8-pgff6\" (UID: \"d8840262-d485-42fc-987f-ea3bb66db3b0\") " pod="calico-system/whisker-54b778b8f8-pgff6" Sep 16 04:42:11.941076 kubelet[3280]: I0916 04:42:11.891843 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvjwj\" (UniqueName: \"kubernetes.io/projected/d8840262-d485-42fc-987f-ea3bb66db3b0-kube-api-access-jvjwj\") pod \"whisker-54b778b8f8-pgff6\" (UID: \"d8840262-d485-42fc-987f-ea3bb66db3b0\") " pod="calico-system/whisker-54b778b8f8-pgff6" Sep 16 04:42:11.941076 kubelet[3280]: I0916 04:42:11.891959 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/06dcaf54-5135-46f9-a579-681a9ddff71a-goldmane-key-pair\") pod \"goldmane-54d579b49d-xmq74\" (UID: \"06dcaf54-5135-46f9-a579-681a9ddff71a\") " pod="calico-system/goldmane-54d579b49d-xmq74" Sep 16 04:42:11.830301 systemd[1]: Created slice kubepods-besteffort-pod10cac60e_b646_4b9b_9967_e70f95c1e33f.slice - libcontainer container kubepods-besteffort-pod10cac60e_b646_4b9b_9967_e70f95c1e33f.slice. 
Sep 16 04:42:11.941189 kubelet[3280]: I0916 04:42:11.891979 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50f30276-5c1b-4f47-90a5-73b141a40f1c-tigera-ca-bundle\") pod \"calico-kube-controllers-6b6466f845-jd2sh\" (UID: \"50f30276-5c1b-4f47-90a5-73b141a40f1c\") " pod="calico-system/calico-kube-controllers-6b6466f845-jd2sh" Sep 16 04:42:11.941189 kubelet[3280]: I0916 04:42:11.891992 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c870852e-7bb4-4f75-b61b-79ac5929ece7-calico-apiserver-certs\") pod \"calico-apiserver-7b46d565d6-8h62f\" (UID: \"c870852e-7bb4-4f75-b61b-79ac5929ece7\") " pod="calico-apiserver/calico-apiserver-7b46d565d6-8h62f" Sep 16 04:42:11.941189 kubelet[3280]: I0916 04:42:11.892006 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0cfdadd1-dfc9-4ba8-a56a-b44112d278d5-calico-apiserver-certs\") pod \"calico-apiserver-68cdb9d4d4-g2x98\" (UID: \"0cfdadd1-dfc9-4ba8-a56a-b44112d278d5\") " pod="calico-apiserver/calico-apiserver-68cdb9d4d4-g2x98" Sep 16 04:42:11.941189 kubelet[3280]: I0916 04:42:11.892215 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06dcaf54-5135-46f9-a579-681a9ddff71a-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-xmq74\" (UID: \"06dcaf54-5135-46f9-a579-681a9ddff71a\") " pod="calico-system/goldmane-54d579b49d-xmq74" Sep 16 04:42:11.941189 kubelet[3280]: I0916 04:42:11.892243 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hwxd\" (UniqueName: \"kubernetes.io/projected/c870852e-7bb4-4f75-b61b-79ac5929ece7-kube-api-access-6hwxd\") pod \"calico-apiserver-7b46d565d6-8h62f\" (UID: \"c870852e-7bb4-4f75-b61b-79ac5929ece7\") " pod="calico-apiserver/calico-apiserver-7b46d565d6-8h62f" Sep 16 04:42:11.846421 systemd[1]: Created slice kubepods-besteffort-pod50f30276_5c1b_4f47_90a5_73b141a40f1c.slice - libcontainer container kubepods-besteffort-pod50f30276_5c1b_4f47_90a5_73b141a40f1c.slice. 
Sep 16 04:42:11.941295 kubelet[3280]: I0916 04:42:11.892403 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jt6sz\" (UniqueName: \"kubernetes.io/projected/0cfdadd1-dfc9-4ba8-a56a-b44112d278d5-kube-api-access-jt6sz\") pod \"calico-apiserver-68cdb9d4d4-g2x98\" (UID: \"0cfdadd1-dfc9-4ba8-a56a-b44112d278d5\") " pod="calico-apiserver/calico-apiserver-68cdb9d4d4-g2x98" Sep 16 04:42:11.941295 kubelet[3280]: I0916 04:42:11.892427 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5wzz\" (UniqueName: \"kubernetes.io/projected/06dcaf54-5135-46f9-a579-681a9ddff71a-kube-api-access-s5wzz\") pod \"goldmane-54d579b49d-xmq74\" (UID: \"06dcaf54-5135-46f9-a579-681a9ddff71a\") " pod="calico-system/goldmane-54d579b49d-xmq74" Sep 16 04:42:11.941295 kubelet[3280]: I0916 04:42:11.892438 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfm28\" (UniqueName: \"kubernetes.io/projected/e8ebb8cf-d133-46d0-bb77-906589d0067d-kube-api-access-mfm28\") pod \"coredns-668d6bf9bc-bv4px\" (UID: \"e8ebb8cf-d133-46d0-bb77-906589d0067d\") " pod="kube-system/coredns-668d6bf9bc-bv4px" Sep 16 04:42:11.941295 kubelet[3280]: I0916 04:42:11.892454 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d8840262-d485-42fc-987f-ea3bb66db3b0-whisker-backend-key-pair\") pod \"whisker-54b778b8f8-pgff6\" (UID: \"d8840262-d485-42fc-987f-ea3bb66db3b0\") " pod="calico-system/whisker-54b778b8f8-pgff6" Sep 16 04:42:11.852061 systemd[1]: Created slice kubepods-besteffort-pod6fdb1262_eead_4398_8cae_b3e060954c4e.slice - libcontainer container kubepods-besteffort-pod6fdb1262_eead_4398_8cae_b3e060954c4e.slice. Sep 16 04:42:11.860670 systemd[1]: Created slice kubepods-burstable-pode8ebb8cf_d133_46d0_bb77_906589d0067d.slice - libcontainer container kubepods-burstable-pode8ebb8cf_d133_46d0_bb77_906589d0067d.slice. Sep 16 04:42:11.868713 systemd[1]: Created slice kubepods-besteffort-podc870852e_7bb4_4f75_b61b_79ac5929ece7.slice - libcontainer container kubepods-besteffort-podc870852e_7bb4_4f75_b61b_79ac5929ece7.slice. Sep 16 04:42:11.874106 systemd[1]: Created slice kubepods-besteffort-pod0cfdadd1_dfc9_4ba8_a56a_b44112d278d5.slice - libcontainer container kubepods-besteffort-pod0cfdadd1_dfc9_4ba8_a56a_b44112d278d5.slice. Sep 16 04:42:11.879131 systemd[1]: Created slice kubepods-besteffort-pod06dcaf54_5135_46f9_a579_681a9ddff71a.slice - libcontainer container kubepods-besteffort-pod06dcaf54_5135_46f9_a579_681a9ddff71a.slice. Sep 16 04:42:11.883223 systemd[1]: Created slice kubepods-besteffort-podd8840262_d485_42fc_987f_ea3bb66db3b0.slice - libcontainer container kubepods-besteffort-podd8840262_d485_42fc_987f_ea3bb66db3b0.slice. 
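The "Created slice" entries interleaved above show the kubelet's systemd cgroup driver at work: each pod gets a slice under its QoS-class parent, named "pod" plus the pod UID with dashes replaced by underscores, because "-" is the hierarchy separator in systemd slice names. A small reconstruction of that naming rule follows, assuming only what the log shows; this is not kubelet source code.

// Hedged reconstruction of the slice naming seen above. The QoS class
// strings and the dash-to-underscore rule are taken from the observed names.
package main

import (
	"fmt"
	"strings"
)

// podSliceName builds e.g. kubepods-besteffort-pod<uid>.slice, with the
// UID's dashes escaped to underscores so they don't split the hierarchy.
func podSliceName(qosClass, podUID string) string {
	escaped := strings.ReplaceAll(podUID, "-", "_")
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, escaped)
}

func main() {
	// Matches "Created slice kubepods-besteffort-pod10cac60e_... .slice" above.
	fmt.Println(podSliceName("besteffort", "10cac60e-b646-4b9b-9967-e70f95c1e33f"))
	// Matches the coredns pod's burstable slice.
	fmt.Println(podSliceName("burstable", "9411887e-95b8-4feb-8af5-65296f88374a"))
}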
Sep 16 04:42:11.942293 containerd[1865]: time="2025-09-16T04:42:11.941709274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kp8nh,Uid:10cac60e-b646-4b9b-9967-e70f95c1e33f,Namespace:calico-system,Attempt:0,}" Sep 16 04:42:12.213146 containerd[1865]: time="2025-09-16T04:42:12.213085283Z" level=error msg="Failed to destroy network for sandbox \"e9c65fdc560649c964680e56db64eb29377c0406d3aaa4dad2ce5b4cfa897c08\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:42:12.238418 containerd[1865]: time="2025-09-16T04:42:12.238315370Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kp8nh,Uid:10cac60e-b646-4b9b-9967-e70f95c1e33f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9c65fdc560649c964680e56db64eb29377c0406d3aaa4dad2ce5b4cfa897c08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:42:12.238418 containerd[1865]: time="2025-09-16T04:42:12.238384453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-d54s2,Uid:9411887e-95b8-4feb-8af5-65296f88374a,Namespace:kube-system,Attempt:0,}" Sep 16 04:42:12.238798 kubelet[3280]: E0916 04:42:12.238760 3280 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9c65fdc560649c964680e56db64eb29377c0406d3aaa4dad2ce5b4cfa897c08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:42:12.238946 kubelet[3280]: E0916 04:42:12.238900 3280 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9c65fdc560649c964680e56db64eb29377c0406d3aaa4dad2ce5b4cfa897c08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kp8nh" Sep 16 04:42:12.238946 kubelet[3280]: E0916 04:42:12.238920 3280 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9c65fdc560649c964680e56db64eb29377c0406d3aaa4dad2ce5b4cfa897c08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kp8nh" Sep 16 04:42:12.239156 kubelet[3280]: E0916 04:42:12.239037 3280 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-kp8nh_calico-system(10cac60e-b646-4b9b-9967-e70f95c1e33f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-kp8nh_calico-system(10cac60e-b646-4b9b-9967-e70f95c1e33f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e9c65fdc560649c964680e56db64eb29377c0406d3aaa4dad2ce5b4cfa897c08\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/csi-node-driver-kp8nh" podUID="10cac60e-b646-4b9b-9967-e70f95c1e33f" Sep 16 04:42:12.245593 containerd[1865]: time="2025-09-16T04:42:12.245511433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54b778b8f8-pgff6,Uid:d8840262-d485-42fc-987f-ea3bb66db3b0,Namespace:calico-system,Attempt:0,}" Sep 16 04:42:12.245872 containerd[1865]: time="2025-09-16T04:42:12.245848734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bv4px,Uid:e8ebb8cf-d133-46d0-bb77-906589d0067d,Namespace:kube-system,Attempt:0,}" Sep 16 04:42:12.246028 containerd[1865]: time="2025-09-16T04:42:12.246007828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b6466f845-jd2sh,Uid:50f30276-5c1b-4f47-90a5-73b141a40f1c,Namespace:calico-system,Attempt:0,}" Sep 16 04:42:12.246228 containerd[1865]: time="2025-09-16T04:42:12.246204619Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b46d565d6-2bmc2,Uid:6fdb1262-eead-4398-8cae-b3e060954c4e,Namespace:calico-apiserver,Attempt:0,}" Sep 16 04:42:12.246381 containerd[1865]: time="2025-09-16T04:42:12.246361681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-xmq74,Uid:06dcaf54-5135-46f9-a579-681a9ddff71a,Namespace:calico-system,Attempt:0,}" Sep 16 04:42:12.246529 containerd[1865]: time="2025-09-16T04:42:12.246511311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b46d565d6-8h62f,Uid:c870852e-7bb4-4f75-b61b-79ac5929ece7,Namespace:calico-apiserver,Attempt:0,}" Sep 16 04:42:12.246802 containerd[1865]: time="2025-09-16T04:42:12.246648860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68cdb9d4d4-g2x98,Uid:0cfdadd1-dfc9-4ba8-a56a-b44112d278d5,Namespace:calico-apiserver,Attempt:0,}" Sep 16 04:42:12.280790 containerd[1865]: time="2025-09-16T04:42:12.280748954Z" level=error msg="Failed to destroy network for sandbox \"cf8e4c1223cb631feb3fc0b1b9f7ab3882cd497de7fec04580d4cfdf35e81f28\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:42:12.354373 containerd[1865]: time="2025-09-16T04:42:12.354284910Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-d54s2,Uid:9411887e-95b8-4feb-8af5-65296f88374a,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf8e4c1223cb631feb3fc0b1b9f7ab3882cd497de7fec04580d4cfdf35e81f28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:42:12.355257 kubelet[3280]: E0916 04:42:12.355122 3280 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf8e4c1223cb631feb3fc0b1b9f7ab3882cd497de7fec04580d4cfdf35e81f28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:42:12.355424 kubelet[3280]: E0916 04:42:12.355285 3280 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf8e4c1223cb631feb3fc0b1b9f7ab3882cd497de7fec04580d4cfdf35e81f28\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-d54s2" Sep 16 04:42:12.355424 kubelet[3280]: E0916 04:42:12.355305 3280 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf8e4c1223cb631feb3fc0b1b9f7ab3882cd497de7fec04580d4cfdf35e81f28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-d54s2" Sep 16 04:42:12.355424 kubelet[3280]: E0916 04:42:12.355350 3280 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-d54s2_kube-system(9411887e-95b8-4feb-8af5-65296f88374a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-d54s2_kube-system(9411887e-95b8-4feb-8af5-65296f88374a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cf8e4c1223cb631feb3fc0b1b9f7ab3882cd497de7fec04580d4cfdf35e81f28\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-d54s2" podUID="9411887e-95b8-4feb-8af5-65296f88374a" Sep 16 04:42:12.395753 containerd[1865]: time="2025-09-16T04:42:12.395710904Z" level=error msg="Failed to destroy network for sandbox \"177ad80414b229319a5c9e14c1b812957c28b9394537ad16376115d1dc01df9c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:42:12.406139 containerd[1865]: time="2025-09-16T04:42:12.405945082Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54b778b8f8-pgff6,Uid:d8840262-d485-42fc-987f-ea3bb66db3b0,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"177ad80414b229319a5c9e14c1b812957c28b9394537ad16376115d1dc01df9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:42:12.406271 kubelet[3280]: E0916 04:42:12.406172 3280 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"177ad80414b229319a5c9e14c1b812957c28b9394537ad16376115d1dc01df9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:42:12.406271 kubelet[3280]: E0916 04:42:12.406226 3280 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"177ad80414b229319a5c9e14c1b812957c28b9394537ad16376115d1dc01df9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-54b778b8f8-pgff6" Sep 16 04:42:12.406271 kubelet[3280]: E0916 04:42:12.406242 3280 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc 
= failed to setup network for sandbox \"177ad80414b229319a5c9e14c1b812957c28b9394537ad16376115d1dc01df9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-54b778b8f8-pgff6" Sep 16 04:42:12.406794 kubelet[3280]: E0916 04:42:12.406667 3280 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-54b778b8f8-pgff6_calico-system(d8840262-d485-42fc-987f-ea3bb66db3b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-54b778b8f8-pgff6_calico-system(d8840262-d485-42fc-987f-ea3bb66db3b0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"177ad80414b229319a5c9e14c1b812957c28b9394537ad16376115d1dc01df9c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-54b778b8f8-pgff6" podUID="d8840262-d485-42fc-987f-ea3bb66db3b0" Sep 16 04:42:12.417658 containerd[1865]: time="2025-09-16T04:42:12.417618842Z" level=error msg="Failed to destroy network for sandbox \"f9e915c6a47936fe8b218bea3d9c3310e1a6b935805e13e13353e728d1d9128d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:42:12.423024 containerd[1865]: time="2025-09-16T04:42:12.422979581Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bv4px,Uid:e8ebb8cf-d133-46d0-bb77-906589d0067d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9e915c6a47936fe8b218bea3d9c3310e1a6b935805e13e13353e728d1d9128d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:42:12.423597 kubelet[3280]: E0916 04:42:12.423313 3280 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9e915c6a47936fe8b218bea3d9c3310e1a6b935805e13e13353e728d1d9128d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:42:12.423597 kubelet[3280]: E0916 04:42:12.423357 3280 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9e915c6a47936fe8b218bea3d9c3310e1a6b935805e13e13353e728d1d9128d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-bv4px" Sep 16 04:42:12.423597 kubelet[3280]: E0916 04:42:12.423371 3280 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9e915c6a47936fe8b218bea3d9c3310e1a6b935805e13e13353e728d1d9128d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-bv4px" Sep 16 04:42:12.423834 
kubelet[3280]: E0916 04:42:12.423399 3280 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-bv4px_kube-system(e8ebb8cf-d133-46d0-bb77-906589d0067d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-bv4px_kube-system(e8ebb8cf-d133-46d0-bb77-906589d0067d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f9e915c6a47936fe8b218bea3d9c3310e1a6b935805e13e13353e728d1d9128d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-bv4px" podUID="e8ebb8cf-d133-46d0-bb77-906589d0067d" Sep 16 04:42:12.433360 containerd[1865]: time="2025-09-16T04:42:12.433300906Z" level=error msg="Failed to destroy network for sandbox \"e0c3d60f3308ee2f885ffd28acfabed906a643f69c9bda1784e034db1ccccb19\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:42:12.434469 containerd[1865]: time="2025-09-16T04:42:12.434222460Z" level=error msg="Failed to destroy network for sandbox \"5a06541ab67559ecac950398865f51e9ba09e2bf6a2065c64b7ab8396a3527b6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:42:12.438445 containerd[1865]: time="2025-09-16T04:42:12.438416427Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b6466f845-jd2sh,Uid:50f30276-5c1b-4f47-90a5-73b141a40f1c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0c3d60f3308ee2f885ffd28acfabed906a643f69c9bda1784e034db1ccccb19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:42:12.439574 kubelet[3280]: E0916 04:42:12.438705 3280 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0c3d60f3308ee2f885ffd28acfabed906a643f69c9bda1784e034db1ccccb19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:42:12.439574 kubelet[3280]: E0916 04:42:12.438930 3280 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0c3d60f3308ee2f885ffd28acfabed906a643f69c9bda1784e034db1ccccb19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6b6466f845-jd2sh" Sep 16 04:42:12.439574 kubelet[3280]: E0916 04:42:12.438949 3280 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0c3d60f3308ee2f885ffd28acfabed906a643f69c9bda1784e034db1ccccb19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-6b6466f845-jd2sh" Sep 16 04:42:12.439700 kubelet[3280]: E0916 04:42:12.438985 3280 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6b6466f845-jd2sh_calico-system(50f30276-5c1b-4f47-90a5-73b141a40f1c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6b6466f845-jd2sh_calico-system(50f30276-5c1b-4f47-90a5-73b141a40f1c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e0c3d60f3308ee2f885ffd28acfabed906a643f69c9bda1784e034db1ccccb19\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6b6466f845-jd2sh" podUID="50f30276-5c1b-4f47-90a5-73b141a40f1c" Sep 16 04:42:12.441912 containerd[1865]: time="2025-09-16T04:42:12.441725439Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b46d565d6-2bmc2,Uid:6fdb1262-eead-4398-8cae-b3e060954c4e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a06541ab67559ecac950398865f51e9ba09e2bf6a2065c64b7ab8396a3527b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:42:12.442396 kubelet[3280]: E0916 04:42:12.442064 3280 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a06541ab67559ecac950398865f51e9ba09e2bf6a2065c64b7ab8396a3527b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:42:12.442396 kubelet[3280]: E0916 04:42:12.442096 3280 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a06541ab67559ecac950398865f51e9ba09e2bf6a2065c64b7ab8396a3527b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b46d565d6-2bmc2" Sep 16 04:42:12.442396 kubelet[3280]: E0916 04:42:12.442111 3280 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a06541ab67559ecac950398865f51e9ba09e2bf6a2065c64b7ab8396a3527b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b46d565d6-2bmc2" Sep 16 04:42:12.442495 kubelet[3280]: E0916 04:42:12.442140 3280 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7b46d565d6-2bmc2_calico-apiserver(6fdb1262-eead-4398-8cae-b3e060954c4e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7b46d565d6-2bmc2_calico-apiserver(6fdb1262-eead-4398-8cae-b3e060954c4e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5a06541ab67559ecac950398865f51e9ba09e2bf6a2065c64b7ab8396a3527b6\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7b46d565d6-2bmc2" podUID="6fdb1262-eead-4398-8cae-b3e060954c4e" Sep 16 04:42:12.448624 containerd[1865]: time="2025-09-16T04:42:12.448574410Z" level=error msg="Failed to destroy network for sandbox \"a8ef525fc207462caa415710db0e953f97331282a35861103383187858cd0303\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:42:12.452328 containerd[1865]: time="2025-09-16T04:42:12.452290774Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b46d565d6-8h62f,Uid:c870852e-7bb4-4f75-b61b-79ac5929ece7,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8ef525fc207462caa415710db0e953f97331282a35861103383187858cd0303\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:42:12.452590 kubelet[3280]: E0916 04:42:12.452452 3280 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8ef525fc207462caa415710db0e953f97331282a35861103383187858cd0303\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:42:12.452590 kubelet[3280]: E0916 04:42:12.452484 3280 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8ef525fc207462caa415710db0e953f97331282a35861103383187858cd0303\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b46d565d6-8h62f" Sep 16 04:42:12.452590 kubelet[3280]: E0916 04:42:12.452498 3280 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8ef525fc207462caa415710db0e953f97331282a35861103383187858cd0303\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b46d565d6-8h62f" Sep 16 04:42:12.452720 kubelet[3280]: E0916 04:42:12.452521 3280 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7b46d565d6-8h62f_calico-apiserver(c870852e-7bb4-4f75-b61b-79ac5929ece7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7b46d565d6-8h62f_calico-apiserver(c870852e-7bb4-4f75-b61b-79ac5929ece7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a8ef525fc207462caa415710db0e953f97331282a35861103383187858cd0303\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7b46d565d6-8h62f" podUID="c870852e-7bb4-4f75-b61b-79ac5929ece7" Sep 16 04:42:12.465840 containerd[1865]: 
time="2025-09-16T04:42:12.465720968Z" level=error msg="Failed to destroy network for sandbox \"5314222b2e20b3cda2e2f48499d0801655dfea911f3531fe02c23aa4a997682c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:42:12.468000 containerd[1865]: time="2025-09-16T04:42:12.467977469Z" level=error msg="Failed to destroy network for sandbox \"082a341f62ba2e5f2b8fdd27809731b65ce07840500d305648fc21f57106fd32\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:42:12.469898 containerd[1865]: time="2025-09-16T04:42:12.469869428Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68cdb9d4d4-g2x98,Uid:0cfdadd1-dfc9-4ba8-a56a-b44112d278d5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5314222b2e20b3cda2e2f48499d0801655dfea911f3531fe02c23aa4a997682c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:42:12.470814 kubelet[3280]: E0916 04:42:12.470167 3280 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5314222b2e20b3cda2e2f48499d0801655dfea911f3531fe02c23aa4a997682c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:42:12.470814 kubelet[3280]: E0916 04:42:12.470214 3280 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5314222b2e20b3cda2e2f48499d0801655dfea911f3531fe02c23aa4a997682c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68cdb9d4d4-g2x98" Sep 16 04:42:12.470814 kubelet[3280]: E0916 04:42:12.470233 3280 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5314222b2e20b3cda2e2f48499d0801655dfea911f3531fe02c23aa4a997682c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68cdb9d4d4-g2x98" Sep 16 04:42:12.471317 kubelet[3280]: E0916 04:42:12.470262 3280 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-68cdb9d4d4-g2x98_calico-apiserver(0cfdadd1-dfc9-4ba8-a56a-b44112d278d5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-68cdb9d4d4-g2x98_calico-apiserver(0cfdadd1-dfc9-4ba8-a56a-b44112d278d5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5314222b2e20b3cda2e2f48499d0801655dfea911f3531fe02c23aa4a997682c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-68cdb9d4d4-g2x98" podUID="0cfdadd1-dfc9-4ba8-a56a-b44112d278d5" Sep 16 04:42:12.473351 containerd[1865]: time="2025-09-16T04:42:12.473319959Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-xmq74,Uid:06dcaf54-5135-46f9-a579-681a9ddff71a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"082a341f62ba2e5f2b8fdd27809731b65ce07840500d305648fc21f57106fd32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:42:12.473691 kubelet[3280]: E0916 04:42:12.473553 3280 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"082a341f62ba2e5f2b8fdd27809731b65ce07840500d305648fc21f57106fd32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 16 04:42:12.473691 kubelet[3280]: E0916 04:42:12.473585 3280 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"082a341f62ba2e5f2b8fdd27809731b65ce07840500d305648fc21f57106fd32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-xmq74" Sep 16 04:42:12.473691 kubelet[3280]: E0916 04:42:12.473634 3280 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"082a341f62ba2e5f2b8fdd27809731b65ce07840500d305648fc21f57106fd32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-xmq74" Sep 16 04:42:12.473868 kubelet[3280]: E0916 04:42:12.473666 3280 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-xmq74_calico-system(06dcaf54-5135-46f9-a579-681a9ddff71a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-xmq74_calico-system(06dcaf54-5135-46f9-a579-681a9ddff71a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"082a341f62ba2e5f2b8fdd27809731b65ce07840500d305648fc21f57106fd32\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-xmq74" podUID="06dcaf54-5135-46f9-a579-681a9ddff71a" Sep 16 04:42:12.944965 containerd[1865]: time="2025-09-16T04:42:12.944773887Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 16 04:42:15.181927 kubelet[3280]: I0916 04:42:15.181750 3280 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 16 04:42:16.794987 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2624976178.mount: Deactivated successfully. 
Sep 16 04:42:17.144534 containerd[1865]: time="2025-09-16T04:42:17.144016826Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:42:17.149394 containerd[1865]: time="2025-09-16T04:42:17.149361631Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=151100457" Sep 16 04:42:17.152566 containerd[1865]: time="2025-09-16T04:42:17.152525553Z" level=info msg="ImageCreate event name:\"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:42:17.157645 containerd[1865]: time="2025-09-16T04:42:17.157361039Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:42:17.157698 containerd[1865]: time="2025-09-16T04:42:17.157670960Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"151100319\" in 4.212836647s" Sep 16 04:42:17.157731 containerd[1865]: time="2025-09-16T04:42:17.157701633Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\"" Sep 16 04:42:17.170706 containerd[1865]: time="2025-09-16T04:42:17.170682211Z" level=info msg="CreateContainer within sandbox \"f57868e137c91dd5aa3a5b9c2ca2a5c7819865ea624bd71492c705208e5f4720\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 16 04:42:17.200878 containerd[1865]: time="2025-09-16T04:42:17.200847609Z" level=info msg="Container f4fee02196b9e4fd8828c2aa0a9ce23a2673f599d1b90bd498ee291154737863: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:42:17.226637 containerd[1865]: time="2025-09-16T04:42:17.226209234Z" level=info msg="CreateContainer within sandbox \"f57868e137c91dd5aa3a5b9c2ca2a5c7819865ea624bd71492c705208e5f4720\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"f4fee02196b9e4fd8828c2aa0a9ce23a2673f599d1b90bd498ee291154737863\"" Sep 16 04:42:17.228621 containerd[1865]: time="2025-09-16T04:42:17.228587107Z" level=info msg="StartContainer for \"f4fee02196b9e4fd8828c2aa0a9ce23a2673f599d1b90bd498ee291154737863\"" Sep 16 04:42:17.229750 containerd[1865]: time="2025-09-16T04:42:17.229718694Z" level=info msg="connecting to shim f4fee02196b9e4fd8828c2aa0a9ce23a2673f599d1b90bd498ee291154737863" address="unix:///run/containerd/s/e38fd07102204b98c9c7f9cf6b8fb4e402fd21d63cfd96e27da6d09e04855b5d" protocol=ttrpc version=3 Sep 16 04:42:17.247737 systemd[1]: Started cri-containerd-f4fee02196b9e4fd8828c2aa0a9ce23a2673f599d1b90bd498ee291154737863.scope - libcontainer container f4fee02196b9e4fd8828c2aa0a9ce23a2673f599d1b90bd498ee291154737863. Sep 16 04:42:17.284016 containerd[1865]: time="2025-09-16T04:42:17.283947541Z" level=info msg="StartContainer for \"f4fee02196b9e4fd8828c2aa0a9ce23a2673f599d1b90bd498ee291154737863\" returns successfully" Sep 16 04:42:17.760101 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 16 04:42:17.760252 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>.
All Rights Reserved. Sep 16 04:42:17.928483 kubelet[3280]: I0916 04:42:17.928440 3280 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d8840262-d485-42fc-987f-ea3bb66db3b0-whisker-backend-key-pair\") pod \"d8840262-d485-42fc-987f-ea3bb66db3b0\" (UID: \"d8840262-d485-42fc-987f-ea3bb66db3b0\") " Sep 16 04:42:17.928483 kubelet[3280]: I0916 04:42:17.928521 3280 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvjwj\" (UniqueName: \"kubernetes.io/projected/d8840262-d485-42fc-987f-ea3bb66db3b0-kube-api-access-jvjwj\") pod \"d8840262-d485-42fc-987f-ea3bb66db3b0\" (UID: \"d8840262-d485-42fc-987f-ea3bb66db3b0\") " Sep 16 04:42:17.928483 kubelet[3280]: I0916 04:42:17.928538 3280 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d8840262-d485-42fc-987f-ea3bb66db3b0-whisker-ca-bundle\") pod \"d8840262-d485-42fc-987f-ea3bb66db3b0\" (UID: \"d8840262-d485-42fc-987f-ea3bb66db3b0\") " Sep 16 04:42:17.930065 kubelet[3280]: I0916 04:42:17.928875 3280 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8840262-d485-42fc-987f-ea3bb66db3b0-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "d8840262-d485-42fc-987f-ea3bb66db3b0" (UID: "d8840262-d485-42fc-987f-ea3bb66db3b0"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 16 04:42:17.933330 systemd[1]: var-lib-kubelet-pods-d8840262\x2dd485\x2d42fc\x2d987f\x2dea3bb66db3b0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djvjwj.mount: Deactivated successfully. Sep 16 04:42:17.937122 kubelet[3280]: I0916 04:42:17.937086 3280 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8840262-d485-42fc-987f-ea3bb66db3b0-kube-api-access-jvjwj" (OuterVolumeSpecName: "kube-api-access-jvjwj") pod "d8840262-d485-42fc-987f-ea3bb66db3b0" (UID: "d8840262-d485-42fc-987f-ea3bb66db3b0"). InnerVolumeSpecName "kube-api-access-jvjwj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 16 04:42:17.938409 kubelet[3280]: I0916 04:42:17.938386 3280 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8840262-d485-42fc-987f-ea3bb66db3b0-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "d8840262-d485-42fc-987f-ea3bb66db3b0" (UID: "d8840262-d485-42fc-987f-ea3bb66db3b0"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 16 04:42:17.938591 systemd[1]: var-lib-kubelet-pods-d8840262\x2dd485\x2d42fc\x2d987f\x2dea3bb66db3b0-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
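The mount unit names above (var-lib-kubelet-pods-d8840262\x2dd485\x2d...mount) are systemd path escaping: leading and trailing "/" are dropped, interior "/" becomes "-", and bytes outside [A-Za-z0-9:_.] are encoded as \xXX, which is why every "-" in the pod UID appears as \x2d and the "~" in kubernetes.io~projected appears as \x7e. A simplified reconstruction follows; the real systemd-escape also special-cases a leading ".".

// Simplified sketch of systemd's path escaping, enough to reproduce the
// mount unit names in this log. Not a complete systemd-escape implementation.
package main

import (
	"fmt"
	"strings"
)

func systemdEscapePath(p string) string {
	p = strings.Trim(p, "/")
	var b strings.Builder
	for i := 0; i < len(p); i++ {
		c := p[i]
		switch {
		case c == '/':
			b.WriteByte('-') // path separators become unit-name dashes
		case c == '_' || c == '.' || c == ':' ||
			(c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z') ||
			(c >= '0' && c <= '9'):
			b.WriteByte(c) // allowed characters pass through
		default:
			fmt.Fprintf(&b, `\x%02x`, c) // everything else, including '-', is hex-escaped
		}
	}
	return b.String()
}

func main() {
	// Reproduces the kube-api-access-jvjwj mount unit deactivated above.
	path := "/var/lib/kubelet/pods/d8840262-d485-42fc-987f-ea3bb66db3b0" +
		"/volumes/kubernetes.io~projected/kube-api-access-jvjwj"
	fmt.Println(systemdEscapePath(path) + ".mount")
}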
Sep 16 04:42:18.030016 kubelet[3280]: I0916 04:42:18.029582 3280 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d8840262-d485-42fc-987f-ea3bb66db3b0-whisker-ca-bundle\") on node \"ci-4459.0.0-n-404d4275b5\" DevicePath \"\"" Sep 16 04:42:18.030016 kubelet[3280]: I0916 04:42:18.029730 3280 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d8840262-d485-42fc-987f-ea3bb66db3b0-whisker-backend-key-pair\") on node \"ci-4459.0.0-n-404d4275b5\" DevicePath \"\"" Sep 16 04:42:18.030016 kubelet[3280]: I0916 04:42:18.029741 3280 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jvjwj\" (UniqueName: \"kubernetes.io/projected/d8840262-d485-42fc-987f-ea3bb66db3b0-kube-api-access-jvjwj\") on node \"ci-4459.0.0-n-404d4275b5\" DevicePath \"\"" Sep 16 04:42:18.056314 systemd[1]: Removed slice kubepods-besteffort-podd8840262_d485_42fc_987f_ea3bb66db3b0.slice - libcontainer container kubepods-besteffort-podd8840262_d485_42fc_987f_ea3bb66db3b0.slice. Sep 16 04:42:18.075420 kubelet[3280]: I0916 04:42:18.075255 3280 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-z4th7" podStartSLOduration=1.719937819 podStartE2EDuration="16.075237843s" podCreationTimestamp="2025-09-16 04:42:02 +0000 UTC" firstStartedPulling="2025-09-16 04:42:02.803089111 +0000 UTC m=+18.047274583" lastFinishedPulling="2025-09-16 04:42:17.158389135 +0000 UTC m=+32.402574607" observedRunningTime="2025-09-16 04:42:18.073707342 +0000 UTC m=+33.317892814" watchObservedRunningTime="2025-09-16 04:42:18.075237843 +0000 UTC m=+33.319423315" Sep 16 04:42:18.144325 systemd[1]: Created slice kubepods-besteffort-poda2cac6c4_2217_4981_a1df_8f36cf1f74a0.slice - libcontainer container kubepods-besteffort-poda2cac6c4_2217_4981_a1df_8f36cf1f74a0.slice. 
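The pod_startup_latency_tracker entry above is internally consistent: podStartSLOduration is the end-to-end start duration minus the image-pull window, i.e. 16.075237843s - (04:42:17.158389135 - 04:42:02.803089111) = 1.719937819s. A worked check of that arithmetic, assuming that definition of the SLO metric and using the timestamps printed in the entry:

// Recompute the tracker's numbers from the timestamps in the log entry.
package main

import (
	"fmt"
	"time"
)

func main() {
	parse := func(s string) time.Time {
		// This is time.Time's default String layout, as printed in the log.
		t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2025-09-16 04:42:02 +0000 UTC")              // podCreationTimestamp
	firstPull := parse("2025-09-16 04:42:02.803089111 +0000 UTC") // firstStartedPulling
	lastPull := parse("2025-09-16 04:42:17.158389135 +0000 UTC")  // lastFinishedPulling
	running := parse("2025-09-16 04:42:18.075237843 +0000 UTC")   // watchObservedRunningTime

	e2e := running.Sub(created)        // 16.075237843s
	pulling := lastPull.Sub(firstPull) // 14.355300024s
	fmt.Println("podStartE2EDuration:", e2e)
	fmt.Println("podStartSLOduration:", e2e-pulling) // 1.719937819s
}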
Sep 16 04:42:18.231382 kubelet[3280]: I0916 04:42:18.231257 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a2cac6c4-2217-4981-a1df-8f36cf1f74a0-whisker-backend-key-pair\") pod \"whisker-6854867669-6bmxm\" (UID: \"a2cac6c4-2217-4981-a1df-8f36cf1f74a0\") " pod="calico-system/whisker-6854867669-6bmxm" Sep 16 04:42:18.231382 kubelet[3280]: I0916 04:42:18.231305 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2cac6c4-2217-4981-a1df-8f36cf1f74a0-whisker-ca-bundle\") pod \"whisker-6854867669-6bmxm\" (UID: \"a2cac6c4-2217-4981-a1df-8f36cf1f74a0\") " pod="calico-system/whisker-6854867669-6bmxm" Sep 16 04:42:18.231382 kubelet[3280]: I0916 04:42:18.231321 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khrkh\" (UniqueName: \"kubernetes.io/projected/a2cac6c4-2217-4981-a1df-8f36cf1f74a0-kube-api-access-khrkh\") pod \"whisker-6854867669-6bmxm\" (UID: \"a2cac6c4-2217-4981-a1df-8f36cf1f74a0\") " pod="calico-system/whisker-6854867669-6bmxm" Sep 16 04:42:18.446931 containerd[1865]: time="2025-09-16T04:42:18.446822733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6854867669-6bmxm,Uid:a2cac6c4-2217-4981-a1df-8f36cf1f74a0,Namespace:calico-system,Attempt:0,}" Sep 16 04:42:18.565360 systemd-networkd[1693]: cali6cf1a5f1c5c: Link UP Sep 16 04:42:18.566147 systemd-networkd[1693]: cali6cf1a5f1c5c: Gained carrier Sep 16 04:42:18.584890 containerd[1865]: 2025-09-16 04:42:18.469 [INFO][4416] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 16 04:42:18.584890 containerd[1865]: 2025-09-16 04:42:18.490 [INFO][4416] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.0.0--n--404d4275b5-k8s-whisker--6854867669--6bmxm-eth0 whisker-6854867669- calico-system a2cac6c4-2217-4981-a1df-8f36cf1f74a0 875 0 2025-09-16 04:42:18 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6854867669 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4459.0.0-n-404d4275b5 whisker-6854867669-6bmxm eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali6cf1a5f1c5c [] [] <nil>}} ContainerID="6dcd2c7b882c21e03ea31feb53a4ab03238f047828d91884d56ccbaa684b1ccd" Namespace="calico-system" Pod="whisker-6854867669-6bmxm" WorkloadEndpoint="ci--4459.0.0--n--404d4275b5-k8s-whisker--6854867669--6bmxm-" Sep 16 04:42:18.584890 containerd[1865]: 2025-09-16 04:42:18.490 [INFO][4416] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6dcd2c7b882c21e03ea31feb53a4ab03238f047828d91884d56ccbaa684b1ccd" Namespace="calico-system" Pod="whisker-6854867669-6bmxm" WorkloadEndpoint="ci--4459.0.0--n--404d4275b5-k8s-whisker--6854867669--6bmxm-eth0" Sep 16 04:42:18.584890 containerd[1865]: 2025-09-16 04:42:18.510 [INFO][4428] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6dcd2c7b882c21e03ea31feb53a4ab03238f047828d91884d56ccbaa684b1ccd" HandleID="k8s-pod-network.6dcd2c7b882c21e03ea31feb53a4ab03238f047828d91884d56ccbaa684b1ccd" Workload="ci--4459.0.0--n--404d4275b5-k8s-whisker--6854867669--6bmxm-eth0" Sep 16 04:42:18.585098 containerd[1865]: 2025-09-16 04:42:18.510 [INFO][4428] ipam/ipam_plugin.go 265:
Auto assigning IP ContainerID="6dcd2c7b882c21e03ea31feb53a4ab03238f047828d91884d56ccbaa684b1ccd" HandleID="k8s-pod-network.6dcd2c7b882c21e03ea31feb53a4ab03238f047828d91884d56ccbaa684b1ccd" Workload="ci--4459.0.0--n--404d4275b5-k8s-whisker--6854867669--6bmxm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400032b4a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.0.0-n-404d4275b5", "pod":"whisker-6854867669-6bmxm", "timestamp":"2025-09-16 04:42:18.510239496 +0000 UTC"}, Hostname:"ci-4459.0.0-n-404d4275b5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 16 04:42:18.585098 containerd[1865]: 2025-09-16 04:42:18.510 [INFO][4428] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 16 04:42:18.585098 containerd[1865]: 2025-09-16 04:42:18.510 [INFO][4428] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 16 04:42:18.585098 containerd[1865]: 2025-09-16 04:42:18.510 [INFO][4428] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.0.0-n-404d4275b5' Sep 16 04:42:18.585098 containerd[1865]: 2025-09-16 04:42:18.516 [INFO][4428] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6dcd2c7b882c21e03ea31feb53a4ab03238f047828d91884d56ccbaa684b1ccd" host="ci-4459.0.0-n-404d4275b5" Sep 16 04:42:18.585098 containerd[1865]: 2025-09-16 04:42:18.524 [INFO][4428] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.0.0-n-404d4275b5" Sep 16 04:42:18.585098 containerd[1865]: 2025-09-16 04:42:18.528 [INFO][4428] ipam/ipam.go 511: Trying affinity for 192.168.27.192/26 host="ci-4459.0.0-n-404d4275b5" Sep 16 04:42:18.585098 containerd[1865]: 2025-09-16 04:42:18.529 [INFO][4428] ipam/ipam.go 158: Attempting to load block cidr=192.168.27.192/26 host="ci-4459.0.0-n-404d4275b5" Sep 16 04:42:18.585098 containerd[1865]: 2025-09-16 04:42:18.531 [INFO][4428] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.27.192/26 host="ci-4459.0.0-n-404d4275b5" Sep 16 04:42:18.585264 containerd[1865]: 2025-09-16 04:42:18.531 [INFO][4428] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.27.192/26 handle="k8s-pod-network.6dcd2c7b882c21e03ea31feb53a4ab03238f047828d91884d56ccbaa684b1ccd" host="ci-4459.0.0-n-404d4275b5" Sep 16 04:42:18.585264 containerd[1865]: 2025-09-16 04:42:18.532 [INFO][4428] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6dcd2c7b882c21e03ea31feb53a4ab03238f047828d91884d56ccbaa684b1ccd Sep 16 04:42:18.585264 containerd[1865]: 2025-09-16 04:42:18.539 [INFO][4428] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.27.192/26 handle="k8s-pod-network.6dcd2c7b882c21e03ea31feb53a4ab03238f047828d91884d56ccbaa684b1ccd" host="ci-4459.0.0-n-404d4275b5" Sep 16 04:42:18.585264 containerd[1865]: 2025-09-16 04:42:18.544 [INFO][4428] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.27.193/26] block=192.168.27.192/26 handle="k8s-pod-network.6dcd2c7b882c21e03ea31feb53a4ab03238f047828d91884d56ccbaa684b1ccd" host="ci-4459.0.0-n-404d4275b5" Sep 16 04:42:18.585264 containerd[1865]: 2025-09-16 04:42:18.544 [INFO][4428] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.27.193/26] handle="k8s-pod-network.6dcd2c7b882c21e03ea31feb53a4ab03238f047828d91884d56ccbaa684b1ccd" host="ci-4459.0.0-n-404d4275b5" Sep 16 04:42:18.585264 containerd[1865]: 2025-09-16 04:42:18.544 
[INFO][4428] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 16 04:42:18.585264 containerd[1865]: 2025-09-16 04:42:18.544 [INFO][4428] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.27.193/26] IPv6=[] ContainerID="6dcd2c7b882c21e03ea31feb53a4ab03238f047828d91884d56ccbaa684b1ccd" HandleID="k8s-pod-network.6dcd2c7b882c21e03ea31feb53a4ab03238f047828d91884d56ccbaa684b1ccd" Workload="ci--4459.0.0--n--404d4275b5-k8s-whisker--6854867669--6bmxm-eth0" Sep 16 04:42:18.585386 containerd[1865]: 2025-09-16 04:42:18.547 [INFO][4416] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6dcd2c7b882c21e03ea31feb53a4ab03238f047828d91884d56ccbaa684b1ccd" Namespace="calico-system" Pod="whisker-6854867669-6bmxm" WorkloadEndpoint="ci--4459.0.0--n--404d4275b5-k8s-whisker--6854867669--6bmxm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.0.0--n--404d4275b5-k8s-whisker--6854867669--6bmxm-eth0", GenerateName:"whisker-6854867669-", Namespace:"calico-system", SelfLink:"", UID:"a2cac6c4-2217-4981-a1df-8f36cf1f74a0", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 42, 18, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6854867669", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.0.0-n-404d4275b5", ContainerID:"", Pod:"whisker-6854867669-6bmxm", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.27.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali6cf1a5f1c5c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:42:18.585386 containerd[1865]: 2025-09-16 04:42:18.547 [INFO][4416] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.27.193/32] ContainerID="6dcd2c7b882c21e03ea31feb53a4ab03238f047828d91884d56ccbaa684b1ccd" Namespace="calico-system" Pod="whisker-6854867669-6bmxm" WorkloadEndpoint="ci--4459.0.0--n--404d4275b5-k8s-whisker--6854867669--6bmxm-eth0" Sep 16 04:42:18.585451 containerd[1865]: 2025-09-16 04:42:18.547 [INFO][4416] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6cf1a5f1c5c ContainerID="6dcd2c7b882c21e03ea31feb53a4ab03238f047828d91884d56ccbaa684b1ccd" Namespace="calico-system" Pod="whisker-6854867669-6bmxm" WorkloadEndpoint="ci--4459.0.0--n--404d4275b5-k8s-whisker--6854867669--6bmxm-eth0" Sep 16 04:42:18.585451 containerd[1865]: 2025-09-16 04:42:18.568 [INFO][4416] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6dcd2c7b882c21e03ea31feb53a4ab03238f047828d91884d56ccbaa684b1ccd" Namespace="calico-system" Pod="whisker-6854867669-6bmxm" WorkloadEndpoint="ci--4459.0.0--n--404d4275b5-k8s-whisker--6854867669--6bmxm-eth0" Sep 16 04:42:18.585486 containerd[1865]: 2025-09-16 04:42:18.568 [INFO][4416] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint
ContainerID="6dcd2c7b882c21e03ea31feb53a4ab03238f047828d91884d56ccbaa684b1ccd" Namespace="calico-system" Pod="whisker-6854867669-6bmxm" WorkloadEndpoint="ci--4459.0.0--n--404d4275b5-k8s-whisker--6854867669--6bmxm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.0.0--n--404d4275b5-k8s-whisker--6854867669--6bmxm-eth0", GenerateName:"whisker-6854867669-", Namespace:"calico-system", SelfLink:"", UID:"a2cac6c4-2217-4981-a1df-8f36cf1f74a0", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 42, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6854867669", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.0.0-n-404d4275b5", ContainerID:"6dcd2c7b882c21e03ea31feb53a4ab03238f047828d91884d56ccbaa684b1ccd", Pod:"whisker-6854867669-6bmxm", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.27.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali6cf1a5f1c5c", MAC:"86:f5:38:d7:75:3d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:42:18.585542 containerd[1865]: 2025-09-16 04:42:18.582 [INFO][4416] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6dcd2c7b882c21e03ea31feb53a4ab03238f047828d91884d56ccbaa684b1ccd" Namespace="calico-system" Pod="whisker-6854867669-6bmxm" WorkloadEndpoint="ci--4459.0.0--n--404d4275b5-k8s-whisker--6854867669--6bmxm-eth0" Sep 16 04:42:18.621013 containerd[1865]: time="2025-09-16T04:42:18.620976010Z" level=info msg="connecting to shim 6dcd2c7b882c21e03ea31feb53a4ab03238f047828d91884d56ccbaa684b1ccd" address="unix:///run/containerd/s/242cc69e8a64c106d20afb5b239c108b2e0dbd38e88a805fbe831b6c951bb1d4" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:42:18.641751 systemd[1]: Started cri-containerd-6dcd2c7b882c21e03ea31feb53a4ab03238f047828d91884d56ccbaa684b1ccd.scope - libcontainer container 6dcd2c7b882c21e03ea31feb53a4ab03238f047828d91884d56ccbaa684b1ccd. 
Sep 16 04:42:18.673986 containerd[1865]: time="2025-09-16T04:42:18.673877750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6854867669-6bmxm,Uid:a2cac6c4-2217-4981-a1df-8f36cf1f74a0,Namespace:calico-system,Attempt:0,} returns sandbox id \"6dcd2c7b882c21e03ea31feb53a4ab03238f047828d91884d56ccbaa684b1ccd\"" Sep 16 04:42:18.675651 containerd[1865]: time="2025-09-16T04:42:18.675272215Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 16 04:42:18.825820 kubelet[3280]: I0916 04:42:18.825677 3280 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8840262-d485-42fc-987f-ea3bb66db3b0" path="/var/lib/kubelet/pods/d8840262-d485-42fc-987f-ea3bb66db3b0/volumes" Sep 16 04:42:19.627817 systemd-networkd[1693]: vxlan.calico: Link UP Sep 16 04:42:19.627826 systemd-networkd[1693]: vxlan.calico: Gained carrier Sep 16 04:42:19.866309 containerd[1865]: time="2025-09-16T04:42:19.866261195Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:42:19.869956 containerd[1865]: time="2025-09-16T04:42:19.869930000Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4605606" Sep 16 04:42:19.873085 containerd[1865]: time="2025-09-16T04:42:19.873059668Z" level=info msg="ImageCreate event name:\"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:42:19.878096 containerd[1865]: time="2025-09-16T04:42:19.878064128Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:42:19.878748 containerd[1865]: time="2025-09-16T04:42:19.878326832Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"5974839\" in 1.203026216s" Sep 16 04:42:19.878748 containerd[1865]: time="2025-09-16T04:42:19.878349856Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\"" Sep 16 04:42:19.880186 containerd[1865]: time="2025-09-16T04:42:19.880130341Z" level=info msg="CreateContainer within sandbox \"6dcd2c7b882c21e03ea31feb53a4ab03238f047828d91884d56ccbaa684b1ccd\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 16 04:42:19.903690 containerd[1865]: time="2025-09-16T04:42:19.903204311Z" level=info msg="Container 456197b2dc5f43c60eb323a1cc915dd8bc5acfd850afb78d934c73c6bba67697: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:42:19.920658 containerd[1865]: time="2025-09-16T04:42:19.920630898Z" level=info msg="CreateContainer within sandbox \"6dcd2c7b882c21e03ea31feb53a4ab03238f047828d91884d56ccbaa684b1ccd\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"456197b2dc5f43c60eb323a1cc915dd8bc5acfd850afb78d934c73c6bba67697\"" Sep 16 04:42:19.921195 containerd[1865]: time="2025-09-16T04:42:19.921167170Z" level=info msg="StartContainer for \"456197b2dc5f43c60eb323a1cc915dd8bc5acfd850afb78d934c73c6bba67697\"" Sep 16 04:42:19.922120 containerd[1865]: 
time="2025-09-16T04:42:19.922096366Z" level=info msg="connecting to shim 456197b2dc5f43c60eb323a1cc915dd8bc5acfd850afb78d934c73c6bba67697" address="unix:///run/containerd/s/242cc69e8a64c106d20afb5b239c108b2e0dbd38e88a805fbe831b6c951bb1d4" protocol=ttrpc version=3 Sep 16 04:42:19.946076 systemd[1]: Started cri-containerd-456197b2dc5f43c60eb323a1cc915dd8bc5acfd850afb78d934c73c6bba67697.scope - libcontainer container 456197b2dc5f43c60eb323a1cc915dd8bc5acfd850afb78d934c73c6bba67697. Sep 16 04:42:19.992593 containerd[1865]: time="2025-09-16T04:42:19.992454510Z" level=info msg="StartContainer for \"456197b2dc5f43c60eb323a1cc915dd8bc5acfd850afb78d934c73c6bba67697\" returns successfully" Sep 16 04:42:19.995329 containerd[1865]: time="2025-09-16T04:42:19.995278977Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 16 04:42:20.272774 systemd-networkd[1693]: cali6cf1a5f1c5c: Gained IPv6LL Sep 16 04:42:20.656896 systemd-networkd[1693]: vxlan.calico: Gained IPv6LL Sep 16 04:42:21.416411 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1222989484.mount: Deactivated successfully. Sep 16 04:42:22.086562 containerd[1865]: time="2025-09-16T04:42:22.085768479Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:42:22.089048 containerd[1865]: time="2025-09-16T04:42:22.089016959Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=30823700" Sep 16 04:42:22.124272 containerd[1865]: time="2025-09-16T04:42:22.124224096Z" level=info msg="ImageCreate event name:\"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:42:22.129695 containerd[1865]: time="2025-09-16T04:42:22.129662329Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:42:22.130708 containerd[1865]: time="2025-09-16T04:42:22.130596236Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"30823530\" in 2.13528309s" Sep 16 04:42:22.130746 containerd[1865]: time="2025-09-16T04:42:22.130711960Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\"" Sep 16 04:42:22.132557 containerd[1865]: time="2025-09-16T04:42:22.132524109Z" level=info msg="CreateContainer within sandbox \"6dcd2c7b882c21e03ea31feb53a4ab03238f047828d91884d56ccbaa684b1ccd\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 16 04:42:22.160899 containerd[1865]: time="2025-09-16T04:42:22.160867211Z" level=info msg="Container fdf3c16428104058f14687d0e6f360e445f8fc8e965e8944d86e8db0c7cfc052: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:42:22.183433 containerd[1865]: time="2025-09-16T04:42:22.183400374Z" level=info msg="CreateContainer within sandbox \"6dcd2c7b882c21e03ea31feb53a4ab03238f047828d91884d56ccbaa684b1ccd\" for 
&ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"fdf3c16428104058f14687d0e6f360e445f8fc8e965e8944d86e8db0c7cfc052\"" Sep 16 04:42:22.184080 containerd[1865]: time="2025-09-16T04:42:22.183968782Z" level=info msg="StartContainer for \"fdf3c16428104058f14687d0e6f360e445f8fc8e965e8944d86e8db0c7cfc052\"" Sep 16 04:42:22.185971 containerd[1865]: time="2025-09-16T04:42:22.185944217Z" level=info msg="connecting to shim fdf3c16428104058f14687d0e6f360e445f8fc8e965e8944d86e8db0c7cfc052" address="unix:///run/containerd/s/242cc69e8a64c106d20afb5b239c108b2e0dbd38e88a805fbe831b6c951bb1d4" protocol=ttrpc version=3 Sep 16 04:42:22.203725 systemd[1]: Started cri-containerd-fdf3c16428104058f14687d0e6f360e445f8fc8e965e8944d86e8db0c7cfc052.scope - libcontainer container fdf3c16428104058f14687d0e6f360e445f8fc8e965e8944d86e8db0c7cfc052. Sep 16 04:42:22.238721 containerd[1865]: time="2025-09-16T04:42:22.238676464Z" level=info msg="StartContainer for \"fdf3c16428104058f14687d0e6f360e445f8fc8e965e8944d86e8db0c7cfc052\" returns successfully" Sep 16 04:42:23.083364 kubelet[3280]: I0916 04:42:23.082927 3280 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6854867669-6bmxm" podStartSLOduration=1.626546116 podStartE2EDuration="5.082911424s" podCreationTimestamp="2025-09-16 04:42:18 +0000 UTC" firstStartedPulling="2025-09-16 04:42:18.67501892 +0000 UTC m=+33.919204400" lastFinishedPulling="2025-09-16 04:42:22.131384236 +0000 UTC m=+37.375569708" observedRunningTime="2025-09-16 04:42:23.077127981 +0000 UTC m=+38.321313461" watchObservedRunningTime="2025-09-16 04:42:23.082911424 +0000 UTC m=+38.327096896" Sep 16 04:42:23.824355 containerd[1865]: time="2025-09-16T04:42:23.824311919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-d54s2,Uid:9411887e-95b8-4feb-8af5-65296f88374a,Namespace:kube-system,Attempt:0,}" Sep 16 04:42:23.917381 systemd-networkd[1693]: cali8578ebd716d: Link UP Sep 16 04:42:23.919035 systemd-networkd[1693]: cali8578ebd716d: Gained carrier Sep 16 04:42:23.938025 containerd[1865]: 2025-09-16 04:42:23.857 [INFO][4768] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.0.0--n--404d4275b5-k8s-coredns--668d6bf9bc--d54s2-eth0 coredns-668d6bf9bc- kube-system 9411887e-95b8-4feb-8af5-65296f88374a 794 0 2025-09-16 04:41:50 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459.0.0-n-404d4275b5 coredns-668d6bf9bc-d54s2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8578ebd716d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="5274b27c4ae5bc01569c89475d89b9b77f38c65452abb44deaed601542b4af7d" Namespace="kube-system" Pod="coredns-668d6bf9bc-d54s2" WorkloadEndpoint="ci--4459.0.0--n--404d4275b5-k8s-coredns--668d6bf9bc--d54s2-" Sep 16 04:42:23.938025 containerd[1865]: 2025-09-16 04:42:23.857 [INFO][4768] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5274b27c4ae5bc01569c89475d89b9b77f38c65452abb44deaed601542b4af7d" Namespace="kube-system" Pod="coredns-668d6bf9bc-d54s2" WorkloadEndpoint="ci--4459.0.0--n--404d4275b5-k8s-coredns--668d6bf9bc--d54s2-eth0" Sep 16 04:42:23.938025 containerd[1865]: 2025-09-16 04:42:23.875 [INFO][4780] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="5274b27c4ae5bc01569c89475d89b9b77f38c65452abb44deaed601542b4af7d" HandleID="k8s-pod-network.5274b27c4ae5bc01569c89475d89b9b77f38c65452abb44deaed601542b4af7d" Workload="ci--4459.0.0--n--404d4275b5-k8s-coredns--668d6bf9bc--d54s2-eth0" Sep 16 04:42:23.938799 containerd[1865]: 2025-09-16 04:42:23.876 [INFO][4780] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5274b27c4ae5bc01569c89475d89b9b77f38c65452abb44deaed601542b4af7d" HandleID="k8s-pod-network.5274b27c4ae5bc01569c89475d89b9b77f38c65452abb44deaed601542b4af7d" Workload="ci--4459.0.0--n--404d4275b5-k8s-coredns--668d6bf9bc--d54s2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024afa0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459.0.0-n-404d4275b5", "pod":"coredns-668d6bf9bc-d54s2", "timestamp":"2025-09-16 04:42:23.875868396 +0000 UTC"}, Hostname:"ci-4459.0.0-n-404d4275b5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 16 04:42:23.938799 containerd[1865]: 2025-09-16 04:42:23.876 [INFO][4780] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 16 04:42:23.938799 containerd[1865]: 2025-09-16 04:42:23.876 [INFO][4780] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 16 04:42:23.938799 containerd[1865]: 2025-09-16 04:42:23.876 [INFO][4780] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.0.0-n-404d4275b5' Sep 16 04:42:23.938799 containerd[1865]: 2025-09-16 04:42:23.881 [INFO][4780] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5274b27c4ae5bc01569c89475d89b9b77f38c65452abb44deaed601542b4af7d" host="ci-4459.0.0-n-404d4275b5" Sep 16 04:42:23.938799 containerd[1865]: 2025-09-16 04:42:23.885 [INFO][4780] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.0.0-n-404d4275b5" Sep 16 04:42:23.938799 containerd[1865]: 2025-09-16 04:42:23.889 [INFO][4780] ipam/ipam.go 511: Trying affinity for 192.168.27.192/26 host="ci-4459.0.0-n-404d4275b5" Sep 16 04:42:23.938799 containerd[1865]: 2025-09-16 04:42:23.890 [INFO][4780] ipam/ipam.go 158: Attempting to load block cidr=192.168.27.192/26 host="ci-4459.0.0-n-404d4275b5" Sep 16 04:42:23.938799 containerd[1865]: 2025-09-16 04:42:23.892 [INFO][4780] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.27.192/26 host="ci-4459.0.0-n-404d4275b5" Sep 16 04:42:23.939155 containerd[1865]: 2025-09-16 04:42:23.892 [INFO][4780] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.27.192/26 handle="k8s-pod-network.5274b27c4ae5bc01569c89475d89b9b77f38c65452abb44deaed601542b4af7d" host="ci-4459.0.0-n-404d4275b5" Sep 16 04:42:23.939155 containerd[1865]: 2025-09-16 04:42:23.893 [INFO][4780] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5274b27c4ae5bc01569c89475d89b9b77f38c65452abb44deaed601542b4af7d Sep 16 04:42:23.939155 containerd[1865]: 2025-09-16 04:42:23.898 [INFO][4780] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.27.192/26 handle="k8s-pod-network.5274b27c4ae5bc01569c89475d89b9b77f38c65452abb44deaed601542b4af7d" host="ci-4459.0.0-n-404d4275b5" Sep 16 04:42:23.939155 containerd[1865]: 2025-09-16 04:42:23.912 [INFO][4780] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.27.194/26] block=192.168.27.192/26 handle="k8s-pod-network.5274b27c4ae5bc01569c89475d89b9b77f38c65452abb44deaed601542b4af7d" 
host="ci-4459.0.0-n-404d4275b5" Sep 16 04:42:23.939155 containerd[1865]: 2025-09-16 04:42:23.912 [INFO][4780] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.27.194/26] handle="k8s-pod-network.5274b27c4ae5bc01569c89475d89b9b77f38c65452abb44deaed601542b4af7d" host="ci-4459.0.0-n-404d4275b5" Sep 16 04:42:23.939155 containerd[1865]: 2025-09-16 04:42:23.912 [INFO][4780] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 16 04:42:23.939155 containerd[1865]: 2025-09-16 04:42:23.912 [INFO][4780] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.27.194/26] IPv6=[] ContainerID="5274b27c4ae5bc01569c89475d89b9b77f38c65452abb44deaed601542b4af7d" HandleID="k8s-pod-network.5274b27c4ae5bc01569c89475d89b9b77f38c65452abb44deaed601542b4af7d" Workload="ci--4459.0.0--n--404d4275b5-k8s-coredns--668d6bf9bc--d54s2-eth0" Sep 16 04:42:23.939273 containerd[1865]: 2025-09-16 04:42:23.914 [INFO][4768] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5274b27c4ae5bc01569c89475d89b9b77f38c65452abb44deaed601542b4af7d" Namespace="kube-system" Pod="coredns-668d6bf9bc-d54s2" WorkloadEndpoint="ci--4459.0.0--n--404d4275b5-k8s-coredns--668d6bf9bc--d54s2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.0.0--n--404d4275b5-k8s-coredns--668d6bf9bc--d54s2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"9411887e-95b8-4feb-8af5-65296f88374a", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 41, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.0.0-n-404d4275b5", ContainerID:"", Pod:"coredns-668d6bf9bc-d54s2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.27.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8578ebd716d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:42:23.939273 containerd[1865]: 2025-09-16 04:42:23.914 [INFO][4768] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.27.194/32] ContainerID="5274b27c4ae5bc01569c89475d89b9b77f38c65452abb44deaed601542b4af7d" Namespace="kube-system" Pod="coredns-668d6bf9bc-d54s2" WorkloadEndpoint="ci--4459.0.0--n--404d4275b5-k8s-coredns--668d6bf9bc--d54s2-eth0" Sep 16 04:42:23.939273 containerd[1865]: 2025-09-16 04:42:23.914 [INFO][4768] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8578ebd716d 
ContainerID="5274b27c4ae5bc01569c89475d89b9b77f38c65452abb44deaed601542b4af7d" Namespace="kube-system" Pod="coredns-668d6bf9bc-d54s2" WorkloadEndpoint="ci--4459.0.0--n--404d4275b5-k8s-coredns--668d6bf9bc--d54s2-eth0" Sep 16 04:42:23.939273 containerd[1865]: 2025-09-16 04:42:23.918 [INFO][4768] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5274b27c4ae5bc01569c89475d89b9b77f38c65452abb44deaed601542b4af7d" Namespace="kube-system" Pod="coredns-668d6bf9bc-d54s2" WorkloadEndpoint="ci--4459.0.0--n--404d4275b5-k8s-coredns--668d6bf9bc--d54s2-eth0" Sep 16 04:42:23.939273 containerd[1865]: 2025-09-16 04:42:23.918 [INFO][4768] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5274b27c4ae5bc01569c89475d89b9b77f38c65452abb44deaed601542b4af7d" Namespace="kube-system" Pod="coredns-668d6bf9bc-d54s2" WorkloadEndpoint="ci--4459.0.0--n--404d4275b5-k8s-coredns--668d6bf9bc--d54s2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.0.0--n--404d4275b5-k8s-coredns--668d6bf9bc--d54s2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"9411887e-95b8-4feb-8af5-65296f88374a", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 41, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.0.0-n-404d4275b5", ContainerID:"5274b27c4ae5bc01569c89475d89b9b77f38c65452abb44deaed601542b4af7d", Pod:"coredns-668d6bf9bc-d54s2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.27.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8578ebd716d", MAC:"2a:4c:24:35:40:79", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:42:23.939273 containerd[1865]: 2025-09-16 04:42:23.934 [INFO][4768] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5274b27c4ae5bc01569c89475d89b9b77f38c65452abb44deaed601542b4af7d" Namespace="kube-system" Pod="coredns-668d6bf9bc-d54s2" WorkloadEndpoint="ci--4459.0.0--n--404d4275b5-k8s-coredns--668d6bf9bc--d54s2-eth0" Sep 16 04:42:23.989676 containerd[1865]: time="2025-09-16T04:42:23.989624535Z" level=info msg="connecting to shim 5274b27c4ae5bc01569c89475d89b9b77f38c65452abb44deaed601542b4af7d" address="unix:///run/containerd/s/e5b333e6657e46e245d64a6b0dd4b4533e893ce0e44da8d1e2ba8e8a8e324ee1" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:42:24.013765 systemd[1]: Started 
cri-containerd-5274b27c4ae5bc01569c89475d89b9b77f38c65452abb44deaed601542b4af7d.scope - libcontainer container 5274b27c4ae5bc01569c89475d89b9b77f38c65452abb44deaed601542b4af7d. Sep 16 04:42:24.048595 containerd[1865]: time="2025-09-16T04:42:24.048504244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-d54s2,Uid:9411887e-95b8-4feb-8af5-65296f88374a,Namespace:kube-system,Attempt:0,} returns sandbox id \"5274b27c4ae5bc01569c89475d89b9b77f38c65452abb44deaed601542b4af7d\"" Sep 16 04:42:24.051585 containerd[1865]: time="2025-09-16T04:42:24.051539029Z" level=info msg="CreateContainer within sandbox \"5274b27c4ae5bc01569c89475d89b9b77f38c65452abb44deaed601542b4af7d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 16 04:42:24.079656 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2903997822.mount: Deactivated successfully. Sep 16 04:42:24.081190 containerd[1865]: time="2025-09-16T04:42:24.079713238Z" level=info msg="Container 0b8e3169fa6f15a7a8e50d956abd3ac17fed9314e506b08a204f8afae5eb8bbb: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:42:24.098207 containerd[1865]: time="2025-09-16T04:42:24.098174056Z" level=info msg="CreateContainer within sandbox \"5274b27c4ae5bc01569c89475d89b9b77f38c65452abb44deaed601542b4af7d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0b8e3169fa6f15a7a8e50d956abd3ac17fed9314e506b08a204f8afae5eb8bbb\"" Sep 16 04:42:24.098864 containerd[1865]: time="2025-09-16T04:42:24.098835012Z" level=info msg="StartContainer for \"0b8e3169fa6f15a7a8e50d956abd3ac17fed9314e506b08a204f8afae5eb8bbb\"" Sep 16 04:42:24.100550 containerd[1865]: time="2025-09-16T04:42:24.100526814Z" level=info msg="connecting to shim 0b8e3169fa6f15a7a8e50d956abd3ac17fed9314e506b08a204f8afae5eb8bbb" address="unix:///run/containerd/s/e5b333e6657e46e245d64a6b0dd4b4533e893ce0e44da8d1e2ba8e8a8e324ee1" protocol=ttrpc version=3 Sep 16 04:42:24.118735 systemd[1]: Started cri-containerd-0b8e3169fa6f15a7a8e50d956abd3ac17fed9314e506b08a204f8afae5eb8bbb.scope - libcontainer container 0b8e3169fa6f15a7a8e50d956abd3ac17fed9314e506b08a204f8afae5eb8bbb. 
Sep 16 04:42:24.146862 containerd[1865]: time="2025-09-16T04:42:24.146826815Z" level=info msg="StartContainer for \"0b8e3169fa6f15a7a8e50d956abd3ac17fed9314e506b08a204f8afae5eb8bbb\" returns successfully" Sep 16 04:42:24.396093 kubelet[3280]: I0916 04:42:24.395791 3280 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 16 04:42:24.456549 containerd[1865]: time="2025-09-16T04:42:24.456511962Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f4fee02196b9e4fd8828c2aa0a9ce23a2673f599d1b90bd498ee291154737863\" id:\"b403a3ff4aadb97937dbd724053faa095299105a320ef798e9886b0c0cdfad52\" pid:4889 exited_at:{seconds:1757997744 nanos:455830510}" Sep 16 04:42:24.525243 containerd[1865]: time="2025-09-16T04:42:24.525204801Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f4fee02196b9e4fd8828c2aa0a9ce23a2673f599d1b90bd498ee291154737863\" id:\"1d233fce60e26b267ed070e825fe47d183d9be35bd9caa30a18df102b62cd80f\" pid:4912 exited_at:{seconds:1757997744 nanos:524820646}" Sep 16 04:42:24.824634 containerd[1865]: time="2025-09-16T04:42:24.824373790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kp8nh,Uid:10cac60e-b646-4b9b-9967-e70f95c1e33f,Namespace:calico-system,Attempt:0,}" Sep 16 04:42:24.825119 containerd[1865]: time="2025-09-16T04:42:24.825013641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68cdb9d4d4-g2x98,Uid:0cfdadd1-dfc9-4ba8-a56a-b44112d278d5,Namespace:calico-apiserver,Attempt:0,}" Sep 16 04:42:24.825119 containerd[1865]: time="2025-09-16T04:42:24.825073387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-xmq74,Uid:06dcaf54-5135-46f9-a579-681a9ddff71a,Namespace:calico-system,Attempt:0,}" Sep 16 04:42:24.956304 systemd-networkd[1693]: calie970514e39e: Link UP Sep 16 04:42:24.957561 systemd-networkd[1693]: calie970514e39e: Gained carrier Sep 16 04:42:24.984515 containerd[1865]: 2025-09-16 04:42:24.870 [INFO][4924] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.0.0--n--404d4275b5-k8s-csi--node--driver--kp8nh-eth0 csi-node-driver- calico-system 10cac60e-b646-4b9b-9967-e70f95c1e33f 698 0 2025-09-16 04:42:02 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4459.0.0-n-404d4275b5 csi-node-driver-kp8nh eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calie970514e39e [] [] }} ContainerID="37a481d6c7fbdd02af5755e828815d90a2d49bd2a5d5669e92601c6da8b8b5d2" Namespace="calico-system" Pod="csi-node-driver-kp8nh" WorkloadEndpoint="ci--4459.0.0--n--404d4275b5-k8s-csi--node--driver--kp8nh-" Sep 16 04:42:24.984515 containerd[1865]: 2025-09-16 04:42:24.870 [INFO][4924] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="37a481d6c7fbdd02af5755e828815d90a2d49bd2a5d5669e92601c6da8b8b5d2" Namespace="calico-system" Pod="csi-node-driver-kp8nh" WorkloadEndpoint="ci--4459.0.0--n--404d4275b5-k8s-csi--node--driver--kp8nh-eth0" Sep 16 04:42:24.984515 containerd[1865]: 2025-09-16 04:42:24.894 [INFO][4955] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="37a481d6c7fbdd02af5755e828815d90a2d49bd2a5d5669e92601c6da8b8b5d2" 
HandleID="k8s-pod-network.37a481d6c7fbdd02af5755e828815d90a2d49bd2a5d5669e92601c6da8b8b5d2" Workload="ci--4459.0.0--n--404d4275b5-k8s-csi--node--driver--kp8nh-eth0" Sep 16 04:42:24.984515 containerd[1865]: 2025-09-16 04:42:24.894 [INFO][4955] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="37a481d6c7fbdd02af5755e828815d90a2d49bd2a5d5669e92601c6da8b8b5d2" HandleID="k8s-pod-network.37a481d6c7fbdd02af5755e828815d90a2d49bd2a5d5669e92601c6da8b8b5d2" Workload="ci--4459.0.0--n--404d4275b5-k8s-csi--node--driver--kp8nh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3010), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.0.0-n-404d4275b5", "pod":"csi-node-driver-kp8nh", "timestamp":"2025-09-16 04:42:24.894218055 +0000 UTC"}, Hostname:"ci-4459.0.0-n-404d4275b5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 16 04:42:24.984515 containerd[1865]: 2025-09-16 04:42:24.894 [INFO][4955] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 16 04:42:24.984515 containerd[1865]: 2025-09-16 04:42:24.894 [INFO][4955] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 16 04:42:24.984515 containerd[1865]: 2025-09-16 04:42:24.894 [INFO][4955] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.0.0-n-404d4275b5' Sep 16 04:42:24.984515 containerd[1865]: 2025-09-16 04:42:24.904 [INFO][4955] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.37a481d6c7fbdd02af5755e828815d90a2d49bd2a5d5669e92601c6da8b8b5d2" host="ci-4459.0.0-n-404d4275b5" Sep 16 04:42:24.984515 containerd[1865]: 2025-09-16 04:42:24.910 [INFO][4955] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.0.0-n-404d4275b5" Sep 16 04:42:24.984515 containerd[1865]: 2025-09-16 04:42:24.915 [INFO][4955] ipam/ipam.go 511: Trying affinity for 192.168.27.192/26 host="ci-4459.0.0-n-404d4275b5" Sep 16 04:42:24.984515 containerd[1865]: 2025-09-16 04:42:24.919 [INFO][4955] ipam/ipam.go 158: Attempting to load block cidr=192.168.27.192/26 host="ci-4459.0.0-n-404d4275b5" Sep 16 04:42:24.984515 containerd[1865]: 2025-09-16 04:42:24.922 [INFO][4955] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.27.192/26 host="ci-4459.0.0-n-404d4275b5" Sep 16 04:42:24.984515 containerd[1865]: 2025-09-16 04:42:24.922 [INFO][4955] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.27.192/26 handle="k8s-pod-network.37a481d6c7fbdd02af5755e828815d90a2d49bd2a5d5669e92601c6da8b8b5d2" host="ci-4459.0.0-n-404d4275b5" Sep 16 04:42:24.984515 containerd[1865]: 2025-09-16 04:42:24.926 [INFO][4955] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.37a481d6c7fbdd02af5755e828815d90a2d49bd2a5d5669e92601c6da8b8b5d2 Sep 16 04:42:24.984515 containerd[1865]: 2025-09-16 04:42:24.933 [INFO][4955] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.27.192/26 handle="k8s-pod-network.37a481d6c7fbdd02af5755e828815d90a2d49bd2a5d5669e92601c6da8b8b5d2" host="ci-4459.0.0-n-404d4275b5" Sep 16 04:42:24.984515 containerd[1865]: 2025-09-16 04:42:24.945 [INFO][4955] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.27.195/26] block=192.168.27.192/26 handle="k8s-pod-network.37a481d6c7fbdd02af5755e828815d90a2d49bd2a5d5669e92601c6da8b8b5d2" host="ci-4459.0.0-n-404d4275b5" Sep 16 04:42:24.984515 containerd[1865]: 2025-09-16 04:42:24.945 
[INFO][4955] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.27.195/26] handle="k8s-pod-network.37a481d6c7fbdd02af5755e828815d90a2d49bd2a5d5669e92601c6da8b8b5d2" host="ci-4459.0.0-n-404d4275b5" Sep 16 04:42:24.984515 containerd[1865]: 2025-09-16 04:42:24.945 [INFO][4955] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 16 04:42:24.984515 containerd[1865]: 2025-09-16 04:42:24.945 [INFO][4955] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.27.195/26] IPv6=[] ContainerID="37a481d6c7fbdd02af5755e828815d90a2d49bd2a5d5669e92601c6da8b8b5d2" HandleID="k8s-pod-network.37a481d6c7fbdd02af5755e828815d90a2d49bd2a5d5669e92601c6da8b8b5d2" Workload="ci--4459.0.0--n--404d4275b5-k8s-csi--node--driver--kp8nh-eth0" Sep 16 04:42:24.985004 containerd[1865]: 2025-09-16 04:42:24.950 [INFO][4924] cni-plugin/k8s.go 418: Populated endpoint ContainerID="37a481d6c7fbdd02af5755e828815d90a2d49bd2a5d5669e92601c6da8b8b5d2" Namespace="calico-system" Pod="csi-node-driver-kp8nh" WorkloadEndpoint="ci--4459.0.0--n--404d4275b5-k8s-csi--node--driver--kp8nh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.0.0--n--404d4275b5-k8s-csi--node--driver--kp8nh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"10cac60e-b646-4b9b-9967-e70f95c1e33f", ResourceVersion:"698", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 42, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.0.0-n-404d4275b5", ContainerID:"", Pod:"csi-node-driver-kp8nh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.27.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie970514e39e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:42:24.985004 containerd[1865]: 2025-09-16 04:42:24.950 [INFO][4924] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.27.195/32] ContainerID="37a481d6c7fbdd02af5755e828815d90a2d49bd2a5d5669e92601c6da8b8b5d2" Namespace="calico-system" Pod="csi-node-driver-kp8nh" WorkloadEndpoint="ci--4459.0.0--n--404d4275b5-k8s-csi--node--driver--kp8nh-eth0" Sep 16 04:42:24.985004 containerd[1865]: 2025-09-16 04:42:24.950 [INFO][4924] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie970514e39e ContainerID="37a481d6c7fbdd02af5755e828815d90a2d49bd2a5d5669e92601c6da8b8b5d2" Namespace="calico-system" Pod="csi-node-driver-kp8nh" WorkloadEndpoint="ci--4459.0.0--n--404d4275b5-k8s-csi--node--driver--kp8nh-eth0" Sep 16 04:42:24.985004 containerd[1865]: 2025-09-16 04:42:24.958 [INFO][4924] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="37a481d6c7fbdd02af5755e828815d90a2d49bd2a5d5669e92601c6da8b8b5d2" Namespace="calico-system" Pod="csi-node-driver-kp8nh" WorkloadEndpoint="ci--4459.0.0--n--404d4275b5-k8s-csi--node--driver--kp8nh-eth0" Sep 16 04:42:24.985004 containerd[1865]: 2025-09-16 04:42:24.959 [INFO][4924] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="37a481d6c7fbdd02af5755e828815d90a2d49bd2a5d5669e92601c6da8b8b5d2" Namespace="calico-system" Pod="csi-node-driver-kp8nh" WorkloadEndpoint="ci--4459.0.0--n--404d4275b5-k8s-csi--node--driver--kp8nh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.0.0--n--404d4275b5-k8s-csi--node--driver--kp8nh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"10cac60e-b646-4b9b-9967-e70f95c1e33f", ResourceVersion:"698", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 42, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.0.0-n-404d4275b5", ContainerID:"37a481d6c7fbdd02af5755e828815d90a2d49bd2a5d5669e92601c6da8b8b5d2", Pod:"csi-node-driver-kp8nh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.27.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie970514e39e", MAC:"76:71:ce:39:c5:04", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:42:24.985004 containerd[1865]: 2025-09-16 04:42:24.980 [INFO][4924] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="37a481d6c7fbdd02af5755e828815d90a2d49bd2a5d5669e92601c6da8b8b5d2" Namespace="calico-system" Pod="csi-node-driver-kp8nh" WorkloadEndpoint="ci--4459.0.0--n--404d4275b5-k8s-csi--node--driver--kp8nh-eth0" Sep 16 04:42:25.008761 systemd-networkd[1693]: cali8578ebd716d: Gained IPv6LL Sep 16 04:42:25.039446 containerd[1865]: time="2025-09-16T04:42:25.039301761Z" level=info msg="connecting to shim 37a481d6c7fbdd02af5755e828815d90a2d49bd2a5d5669e92601c6da8b8b5d2" address="unix:///run/containerd/s/9b0acbd702e74908cbb770d5181ad14ea43ef85703c74eacb0fc610e1f4acb93" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:42:25.058645 systemd-networkd[1693]: cali07ebd3da009: Link UP Sep 16 04:42:25.059708 systemd-networkd[1693]: cali07ebd3da009: Gained carrier Sep 16 04:42:25.072752 systemd[1]: Started cri-containerd-37a481d6c7fbdd02af5755e828815d90a2d49bd2a5d5669e92601c6da8b8b5d2.scope - libcontainer container 37a481d6c7fbdd02af5755e828815d90a2d49bd2a5d5669e92601c6da8b8b5d2. 
Sep 16 04:42:25.080844 containerd[1865]: 2025-09-16 04:42:24.907 [INFO][4934] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.0.0--n--404d4275b5-k8s-calico--apiserver--68cdb9d4d4--g2x98-eth0 calico-apiserver-68cdb9d4d4- calico-apiserver 0cfdadd1-dfc9-4ba8-a56a-b44112d278d5 802 0 2025-09-16 04:42:00 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:68cdb9d4d4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.0.0-n-404d4275b5 calico-apiserver-68cdb9d4d4-g2x98 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali07ebd3da009 [] [] }} ContainerID="466fdc08fb62e9e9cc8a14a51adc81934dfdba1f7f8548f3c403c111142b765f" Namespace="calico-apiserver" Pod="calico-apiserver-68cdb9d4d4-g2x98" WorkloadEndpoint="ci--4459.0.0--n--404d4275b5-k8s-calico--apiserver--68cdb9d4d4--g2x98-" Sep 16 04:42:25.080844 containerd[1865]: 2025-09-16 04:42:24.907 [INFO][4934] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="466fdc08fb62e9e9cc8a14a51adc81934dfdba1f7f8548f3c403c111142b765f" Namespace="calico-apiserver" Pod="calico-apiserver-68cdb9d4d4-g2x98" WorkloadEndpoint="ci--4459.0.0--n--404d4275b5-k8s-calico--apiserver--68cdb9d4d4--g2x98-eth0" Sep 16 04:42:25.080844 containerd[1865]: 2025-09-16 04:42:24.940 [INFO][4966] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="466fdc08fb62e9e9cc8a14a51adc81934dfdba1f7f8548f3c403c111142b765f" HandleID="k8s-pod-network.466fdc08fb62e9e9cc8a14a51adc81934dfdba1f7f8548f3c403c111142b765f" Workload="ci--4459.0.0--n--404d4275b5-k8s-calico--apiserver--68cdb9d4d4--g2x98-eth0" Sep 16 04:42:25.080844 containerd[1865]: 2025-09-16 04:42:24.940 [INFO][4966] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="466fdc08fb62e9e9cc8a14a51adc81934dfdba1f7f8548f3c403c111142b765f" HandleID="k8s-pod-network.466fdc08fb62e9e9cc8a14a51adc81934dfdba1f7f8548f3c403c111142b765f" Workload="ci--4459.0.0--n--404d4275b5-k8s-calico--apiserver--68cdb9d4d4--g2x98-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3120), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459.0.0-n-404d4275b5", "pod":"calico-apiserver-68cdb9d4d4-g2x98", "timestamp":"2025-09-16 04:42:24.939467857 +0000 UTC"}, Hostname:"ci-4459.0.0-n-404d4275b5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 16 04:42:25.080844 containerd[1865]: 2025-09-16 04:42:24.940 [INFO][4966] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 16 04:42:25.080844 containerd[1865]: 2025-09-16 04:42:24.945 [INFO][4966] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 16 04:42:25.080844 containerd[1865]: 2025-09-16 04:42:24.946 [INFO][4966] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.0.0-n-404d4275b5' Sep 16 04:42:25.080844 containerd[1865]: 2025-09-16 04:42:25.005 [INFO][4966] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.466fdc08fb62e9e9cc8a14a51adc81934dfdba1f7f8548f3c403c111142b765f" host="ci-4459.0.0-n-404d4275b5" Sep 16 04:42:25.080844 containerd[1865]: 2025-09-16 04:42:25.015 [INFO][4966] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.0.0-n-404d4275b5" Sep 16 04:42:25.080844 containerd[1865]: 2025-09-16 04:42:25.022 [INFO][4966] ipam/ipam.go 511: Trying affinity for 192.168.27.192/26 host="ci-4459.0.0-n-404d4275b5" Sep 16 04:42:25.080844 containerd[1865]: 2025-09-16 04:42:25.024 [INFO][4966] ipam/ipam.go 158: Attempting to load block cidr=192.168.27.192/26 host="ci-4459.0.0-n-404d4275b5" Sep 16 04:42:25.080844 containerd[1865]: 2025-09-16 04:42:25.026 [INFO][4966] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.27.192/26 host="ci-4459.0.0-n-404d4275b5" Sep 16 04:42:25.080844 containerd[1865]: 2025-09-16 04:42:25.026 [INFO][4966] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.27.192/26 handle="k8s-pod-network.466fdc08fb62e9e9cc8a14a51adc81934dfdba1f7f8548f3c403c111142b765f" host="ci-4459.0.0-n-404d4275b5" Sep 16 04:42:25.080844 containerd[1865]: 2025-09-16 04:42:25.027 [INFO][4966] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.466fdc08fb62e9e9cc8a14a51adc81934dfdba1f7f8548f3c403c111142b765f Sep 16 04:42:25.080844 containerd[1865]: 2025-09-16 04:42:25.035 [INFO][4966] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.27.192/26 handle="k8s-pod-network.466fdc08fb62e9e9cc8a14a51adc81934dfdba1f7f8548f3c403c111142b765f" host="ci-4459.0.0-n-404d4275b5" Sep 16 04:42:25.080844 containerd[1865]: 2025-09-16 04:42:25.044 [INFO][4966] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.27.196/26] block=192.168.27.192/26 handle="k8s-pod-network.466fdc08fb62e9e9cc8a14a51adc81934dfdba1f7f8548f3c403c111142b765f" host="ci-4459.0.0-n-404d4275b5" Sep 16 04:42:25.080844 containerd[1865]: 2025-09-16 04:42:25.044 [INFO][4966] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.27.196/26] handle="k8s-pod-network.466fdc08fb62e9e9cc8a14a51adc81934dfdba1f7f8548f3c403c111142b765f" host="ci-4459.0.0-n-404d4275b5" Sep 16 04:42:25.080844 containerd[1865]: 2025-09-16 04:42:25.044 [INFO][4966] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 16 04:42:25.080844 containerd[1865]: 2025-09-16 04:42:25.044 [INFO][4966] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.27.196/26] IPv6=[] ContainerID="466fdc08fb62e9e9cc8a14a51adc81934dfdba1f7f8548f3c403c111142b765f" HandleID="k8s-pod-network.466fdc08fb62e9e9cc8a14a51adc81934dfdba1f7f8548f3c403c111142b765f" Workload="ci--4459.0.0--n--404d4275b5-k8s-calico--apiserver--68cdb9d4d4--g2x98-eth0" Sep 16 04:42:25.081214 containerd[1865]: 2025-09-16 04:42:25.052 [INFO][4934] cni-plugin/k8s.go 418: Populated endpoint ContainerID="466fdc08fb62e9e9cc8a14a51adc81934dfdba1f7f8548f3c403c111142b765f" Namespace="calico-apiserver" Pod="calico-apiserver-68cdb9d4d4-g2x98" WorkloadEndpoint="ci--4459.0.0--n--404d4275b5-k8s-calico--apiserver--68cdb9d4d4--g2x98-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.0.0--n--404d4275b5-k8s-calico--apiserver--68cdb9d4d4--g2x98-eth0", GenerateName:"calico-apiserver-68cdb9d4d4-", Namespace:"calico-apiserver", SelfLink:"", UID:"0cfdadd1-dfc9-4ba8-a56a-b44112d278d5", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 42, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68cdb9d4d4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.0.0-n-404d4275b5", ContainerID:"", Pod:"calico-apiserver-68cdb9d4d4-g2x98", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.27.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali07ebd3da009", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:42:25.081214 containerd[1865]: 2025-09-16 04:42:25.052 [INFO][4934] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.27.196/32] ContainerID="466fdc08fb62e9e9cc8a14a51adc81934dfdba1f7f8548f3c403c111142b765f" Namespace="calico-apiserver" Pod="calico-apiserver-68cdb9d4d4-g2x98" WorkloadEndpoint="ci--4459.0.0--n--404d4275b5-k8s-calico--apiserver--68cdb9d4d4--g2x98-eth0" Sep 16 04:42:25.081214 containerd[1865]: 2025-09-16 04:42:25.052 [INFO][4934] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali07ebd3da009 ContainerID="466fdc08fb62e9e9cc8a14a51adc81934dfdba1f7f8548f3c403c111142b765f" Namespace="calico-apiserver" Pod="calico-apiserver-68cdb9d4d4-g2x98" WorkloadEndpoint="ci--4459.0.0--n--404d4275b5-k8s-calico--apiserver--68cdb9d4d4--g2x98-eth0" Sep 16 04:42:25.081214 containerd[1865]: 2025-09-16 04:42:25.060 [INFO][4934] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="466fdc08fb62e9e9cc8a14a51adc81934dfdba1f7f8548f3c403c111142b765f" Namespace="calico-apiserver" Pod="calico-apiserver-68cdb9d4d4-g2x98" WorkloadEndpoint="ci--4459.0.0--n--404d4275b5-k8s-calico--apiserver--68cdb9d4d4--g2x98-eth0" Sep 16 04:42:25.081214 containerd[1865]: 2025-09-16 04:42:25.060 
[INFO][4934] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="466fdc08fb62e9e9cc8a14a51adc81934dfdba1f7f8548f3c403c111142b765f" Namespace="calico-apiserver" Pod="calico-apiserver-68cdb9d4d4-g2x98" WorkloadEndpoint="ci--4459.0.0--n--404d4275b5-k8s-calico--apiserver--68cdb9d4d4--g2x98-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.0.0--n--404d4275b5-k8s-calico--apiserver--68cdb9d4d4--g2x98-eth0", GenerateName:"calico-apiserver-68cdb9d4d4-", Namespace:"calico-apiserver", SelfLink:"", UID:"0cfdadd1-dfc9-4ba8-a56a-b44112d278d5", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 42, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68cdb9d4d4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.0.0-n-404d4275b5", ContainerID:"466fdc08fb62e9e9cc8a14a51adc81934dfdba1f7f8548f3c403c111142b765f", Pod:"calico-apiserver-68cdb9d4d4-g2x98", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.27.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali07ebd3da009", MAC:"02:8c:29:13:3a:7f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:42:25.081214 containerd[1865]: 2025-09-16 04:42:25.076 [INFO][4934] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="466fdc08fb62e9e9cc8a14a51adc81934dfdba1f7f8548f3c403c111142b765f" Namespace="calico-apiserver" Pod="calico-apiserver-68cdb9d4d4-g2x98" WorkloadEndpoint="ci--4459.0.0--n--404d4275b5-k8s-calico--apiserver--68cdb9d4d4--g2x98-eth0" Sep 16 04:42:25.090809 kubelet[3280]: I0916 04:42:25.089531 3280 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-d54s2" podStartSLOduration=35.089516157 podStartE2EDuration="35.089516157s" podCreationTimestamp="2025-09-16 04:41:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:42:25.089381713 +0000 UTC m=+40.333567225" watchObservedRunningTime="2025-09-16 04:42:25.089516157 +0000 UTC m=+40.333701645" Sep 16 04:42:25.116351 containerd[1865]: time="2025-09-16T04:42:25.116054878Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kp8nh,Uid:10cac60e-b646-4b9b-9967-e70f95c1e33f,Namespace:calico-system,Attempt:0,} returns sandbox id \"37a481d6c7fbdd02af5755e828815d90a2d49bd2a5d5669e92601c6da8b8b5d2\"" Sep 16 04:42:25.119062 containerd[1865]: time="2025-09-16T04:42:25.119026110Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 16 04:42:25.144087 containerd[1865]: time="2025-09-16T04:42:25.143964575Z" level=info msg="connecting to shim 466fdc08fb62e9e9cc8a14a51adc81934dfdba1f7f8548f3c403c111142b765f" 
address="unix:///run/containerd/s/6b2276c23d9489feca0529db138a852f14b37d892d28ade64c7907b003b6f809" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:42:25.170911 systemd[1]: Started cri-containerd-466fdc08fb62e9e9cc8a14a51adc81934dfdba1f7f8548f3c403c111142b765f.scope - libcontainer container 466fdc08fb62e9e9cc8a14a51adc81934dfdba1f7f8548f3c403c111142b765f. Sep 16 04:42:25.177129 systemd-networkd[1693]: calib1673ecacd7: Link UP Sep 16 04:42:25.178330 systemd-networkd[1693]: calib1673ecacd7: Gained carrier Sep 16 04:42:25.202920 containerd[1865]: 2025-09-16 04:42:24.926 [INFO][4943] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.0.0--n--404d4275b5-k8s-goldmane--54d579b49d--xmq74-eth0 goldmane-54d579b49d- calico-system 06dcaf54-5135-46f9-a579-681a9ddff71a 805 0 2025-09-16 04:42:02 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4459.0.0-n-404d4275b5 goldmane-54d579b49d-xmq74 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calib1673ecacd7 [] [] }} ContainerID="19446881c97ad4f0b5c6e993541fa3705360f8f32b1e750341d250df60fa0a1b" Namespace="calico-system" Pod="goldmane-54d579b49d-xmq74" WorkloadEndpoint="ci--4459.0.0--n--404d4275b5-k8s-goldmane--54d579b49d--xmq74-" Sep 16 04:42:25.202920 containerd[1865]: 2025-09-16 04:42:24.927 [INFO][4943] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="19446881c97ad4f0b5c6e993541fa3705360f8f32b1e750341d250df60fa0a1b" Namespace="calico-system" Pod="goldmane-54d579b49d-xmq74" WorkloadEndpoint="ci--4459.0.0--n--404d4275b5-k8s-goldmane--54d579b49d--xmq74-eth0" Sep 16 04:42:25.202920 containerd[1865]: 2025-09-16 04:42:24.973 [INFO][4972] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="19446881c97ad4f0b5c6e993541fa3705360f8f32b1e750341d250df60fa0a1b" HandleID="k8s-pod-network.19446881c97ad4f0b5c6e993541fa3705360f8f32b1e750341d250df60fa0a1b" Workload="ci--4459.0.0--n--404d4275b5-k8s-goldmane--54d579b49d--xmq74-eth0" Sep 16 04:42:25.202920 containerd[1865]: 2025-09-16 04:42:24.974 [INFO][4972] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="19446881c97ad4f0b5c6e993541fa3705360f8f32b1e750341d250df60fa0a1b" HandleID="k8s-pod-network.19446881c97ad4f0b5c6e993541fa3705360f8f32b1e750341d250df60fa0a1b" Workload="ci--4459.0.0--n--404d4275b5-k8s-goldmane--54d579b49d--xmq74-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cb640), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.0.0-n-404d4275b5", "pod":"goldmane-54d579b49d-xmq74", "timestamp":"2025-09-16 04:42:24.973267704 +0000 UTC"}, Hostname:"ci-4459.0.0-n-404d4275b5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 16 04:42:25.202920 containerd[1865]: 2025-09-16 04:42:24.974 [INFO][4972] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 16 04:42:25.202920 containerd[1865]: 2025-09-16 04:42:25.044 [INFO][4972] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 16 04:42:25.202920 containerd[1865]: 2025-09-16 04:42:25.044 [INFO][4972] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.0.0-n-404d4275b5' Sep 16 04:42:25.202920 containerd[1865]: 2025-09-16 04:42:25.105 [INFO][4972] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.19446881c97ad4f0b5c6e993541fa3705360f8f32b1e750341d250df60fa0a1b" host="ci-4459.0.0-n-404d4275b5" Sep 16 04:42:25.202920 containerd[1865]: 2025-09-16 04:42:25.116 [INFO][4972] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.0.0-n-404d4275b5" Sep 16 04:42:25.202920 containerd[1865]: 2025-09-16 04:42:25.142 [INFO][4972] ipam/ipam.go 511: Trying affinity for 192.168.27.192/26 host="ci-4459.0.0-n-404d4275b5" Sep 16 04:42:25.202920 containerd[1865]: 2025-09-16 04:42:25.145 [INFO][4972] ipam/ipam.go 158: Attempting to load block cidr=192.168.27.192/26 host="ci-4459.0.0-n-404d4275b5" Sep 16 04:42:25.202920 containerd[1865]: 2025-09-16 04:42:25.148 [INFO][4972] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.27.192/26 host="ci-4459.0.0-n-404d4275b5" Sep 16 04:42:25.202920 containerd[1865]: 2025-09-16 04:42:25.148 [INFO][4972] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.27.192/26 handle="k8s-pod-network.19446881c97ad4f0b5c6e993541fa3705360f8f32b1e750341d250df60fa0a1b" host="ci-4459.0.0-n-404d4275b5" Sep 16 04:42:25.202920 containerd[1865]: 2025-09-16 04:42:25.151 [INFO][4972] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.19446881c97ad4f0b5c6e993541fa3705360f8f32b1e750341d250df60fa0a1b Sep 16 04:42:25.202920 containerd[1865]: 2025-09-16 04:42:25.157 [INFO][4972] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.27.192/26 handle="k8s-pod-network.19446881c97ad4f0b5c6e993541fa3705360f8f32b1e750341d250df60fa0a1b" host="ci-4459.0.0-n-404d4275b5" Sep 16 04:42:25.202920 containerd[1865]: 2025-09-16 04:42:25.168 [INFO][4972] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.27.197/26] block=192.168.27.192/26 handle="k8s-pod-network.19446881c97ad4f0b5c6e993541fa3705360f8f32b1e750341d250df60fa0a1b" host="ci-4459.0.0-n-404d4275b5" Sep 16 04:42:25.202920 containerd[1865]: 2025-09-16 04:42:25.168 [INFO][4972] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.27.197/26] handle="k8s-pod-network.19446881c97ad4f0b5c6e993541fa3705360f8f32b1e750341d250df60fa0a1b" host="ci-4459.0.0-n-404d4275b5" Sep 16 04:42:25.202920 containerd[1865]: 2025-09-16 04:42:25.168 [INFO][4972] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 16 04:42:25.202920 containerd[1865]: 2025-09-16 04:42:25.168 [INFO][4972] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.27.197/26] IPv6=[] ContainerID="19446881c97ad4f0b5c6e993541fa3705360f8f32b1e750341d250df60fa0a1b" HandleID="k8s-pod-network.19446881c97ad4f0b5c6e993541fa3705360f8f32b1e750341d250df60fa0a1b" Workload="ci--4459.0.0--n--404d4275b5-k8s-goldmane--54d579b49d--xmq74-eth0" Sep 16 04:42:25.203356 containerd[1865]: 2025-09-16 04:42:25.170 [INFO][4943] cni-plugin/k8s.go 418: Populated endpoint ContainerID="19446881c97ad4f0b5c6e993541fa3705360f8f32b1e750341d250df60fa0a1b" Namespace="calico-system" Pod="goldmane-54d579b49d-xmq74" WorkloadEndpoint="ci--4459.0.0--n--404d4275b5-k8s-goldmane--54d579b49d--xmq74-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.0.0--n--404d4275b5-k8s-goldmane--54d579b49d--xmq74-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"06dcaf54-5135-46f9-a579-681a9ddff71a", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 42, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.0.0-n-404d4275b5", ContainerID:"", Pod:"goldmane-54d579b49d-xmq74", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.27.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib1673ecacd7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:42:25.203356 containerd[1865]: 2025-09-16 04:42:25.170 [INFO][4943] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.27.197/32] ContainerID="19446881c97ad4f0b5c6e993541fa3705360f8f32b1e750341d250df60fa0a1b" Namespace="calico-system" Pod="goldmane-54d579b49d-xmq74" WorkloadEndpoint="ci--4459.0.0--n--404d4275b5-k8s-goldmane--54d579b49d--xmq74-eth0" Sep 16 04:42:25.203356 containerd[1865]: 2025-09-16 04:42:25.170 [INFO][4943] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib1673ecacd7 ContainerID="19446881c97ad4f0b5c6e993541fa3705360f8f32b1e750341d250df60fa0a1b" Namespace="calico-system" Pod="goldmane-54d579b49d-xmq74" WorkloadEndpoint="ci--4459.0.0--n--404d4275b5-k8s-goldmane--54d579b49d--xmq74-eth0" Sep 16 04:42:25.203356 containerd[1865]: 2025-09-16 04:42:25.179 [INFO][4943] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="19446881c97ad4f0b5c6e993541fa3705360f8f32b1e750341d250df60fa0a1b" Namespace="calico-system" Pod="goldmane-54d579b49d-xmq74" WorkloadEndpoint="ci--4459.0.0--n--404d4275b5-k8s-goldmane--54d579b49d--xmq74-eth0" Sep 16 04:42:25.203356 containerd[1865]: 2025-09-16 04:42:25.179 [INFO][4943] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="19446881c97ad4f0b5c6e993541fa3705360f8f32b1e750341d250df60fa0a1b" 
Namespace="calico-system" Pod="goldmane-54d579b49d-xmq74" WorkloadEndpoint="ci--4459.0.0--n--404d4275b5-k8s-goldmane--54d579b49d--xmq74-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.0.0--n--404d4275b5-k8s-goldmane--54d579b49d--xmq74-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"06dcaf54-5135-46f9-a579-681a9ddff71a", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 42, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.0.0-n-404d4275b5", ContainerID:"19446881c97ad4f0b5c6e993541fa3705360f8f32b1e750341d250df60fa0a1b", Pod:"goldmane-54d579b49d-xmq74", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.27.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib1673ecacd7", MAC:"2e:61:f8:9b:d5:8e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:42:25.203356 containerd[1865]: 2025-09-16 04:42:25.198 [INFO][4943] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="19446881c97ad4f0b5c6e993541fa3705360f8f32b1e750341d250df60fa0a1b" Namespace="calico-system" Pod="goldmane-54d579b49d-xmq74" WorkloadEndpoint="ci--4459.0.0--n--404d4275b5-k8s-goldmane--54d579b49d--xmq74-eth0" Sep 16 04:42:25.216418 containerd[1865]: time="2025-09-16T04:42:25.216379548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68cdb9d4d4-g2x98,Uid:0cfdadd1-dfc9-4ba8-a56a-b44112d278d5,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"466fdc08fb62e9e9cc8a14a51adc81934dfdba1f7f8548f3c403c111142b765f\"" Sep 16 04:42:25.257104 containerd[1865]: time="2025-09-16T04:42:25.256686948Z" level=info msg="connecting to shim 19446881c97ad4f0b5c6e993541fa3705360f8f32b1e750341d250df60fa0a1b" address="unix:///run/containerd/s/c70d0530a706cd07c397163baeb1d8522126bc97a1be10774a978727a91d52da" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:42:25.273789 systemd[1]: Started cri-containerd-19446881c97ad4f0b5c6e993541fa3705360f8f32b1e750341d250df60fa0a1b.scope - libcontainer container 19446881c97ad4f0b5c6e993541fa3705360f8f32b1e750341d250df60fa0a1b. 
Sep 16 04:42:25.303375 containerd[1865]: time="2025-09-16T04:42:25.303342663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-xmq74,Uid:06dcaf54-5135-46f9-a579-681a9ddff71a,Namespace:calico-system,Attempt:0,} returns sandbox id \"19446881c97ad4f0b5c6e993541fa3705360f8f32b1e750341d250df60fa0a1b\"" Sep 16 04:42:25.824146 containerd[1865]: time="2025-09-16T04:42:25.823869826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b46d565d6-2bmc2,Uid:6fdb1262-eead-4398-8cae-b3e060954c4e,Namespace:calico-apiserver,Attempt:0,}" Sep 16 04:42:25.824146 containerd[1865]: time="2025-09-16T04:42:25.824131489Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b46d565d6-8h62f,Uid:c870852e-7bb4-4f75-b61b-79ac5929ece7,Namespace:calico-apiserver,Attempt:0,}" Sep 16 04:42:25.948869 systemd-networkd[1693]: calif300e3586e8: Link UP Sep 16 04:42:25.950361 systemd-networkd[1693]: calif300e3586e8: Gained carrier Sep 16 04:42:25.968777 containerd[1865]: 2025-09-16 04:42:25.889 [INFO][5158] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.0.0--n--404d4275b5-k8s-calico--apiserver--7b46d565d6--2bmc2-eth0 calico-apiserver-7b46d565d6- calico-apiserver 6fdb1262-eead-4398-8cae-b3e060954c4e 797 0 2025-09-16 04:41:59 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7b46d565d6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.0.0-n-404d4275b5 calico-apiserver-7b46d565d6-2bmc2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif300e3586e8 [] [] }} ContainerID="fac594be40861758e6b24dac15a3376b609ab2e87e63194f42b29974ebd355ec" Namespace="calico-apiserver" Pod="calico-apiserver-7b46d565d6-2bmc2" WorkloadEndpoint="ci--4459.0.0--n--404d4275b5-k8s-calico--apiserver--7b46d565d6--2bmc2-" Sep 16 04:42:25.968777 containerd[1865]: 2025-09-16 04:42:25.889 [INFO][5158] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fac594be40861758e6b24dac15a3376b609ab2e87e63194f42b29974ebd355ec" Namespace="calico-apiserver" Pod="calico-apiserver-7b46d565d6-2bmc2" WorkloadEndpoint="ci--4459.0.0--n--404d4275b5-k8s-calico--apiserver--7b46d565d6--2bmc2-eth0" Sep 16 04:42:25.968777 containerd[1865]: 2025-09-16 04:42:25.908 [INFO][5184] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fac594be40861758e6b24dac15a3376b609ab2e87e63194f42b29974ebd355ec" HandleID="k8s-pod-network.fac594be40861758e6b24dac15a3376b609ab2e87e63194f42b29974ebd355ec" Workload="ci--4459.0.0--n--404d4275b5-k8s-calico--apiserver--7b46d565d6--2bmc2-eth0" Sep 16 04:42:25.968777 containerd[1865]: 2025-09-16 04:42:25.908 [INFO][5184] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fac594be40861758e6b24dac15a3376b609ab2e87e63194f42b29974ebd355ec" HandleID="k8s-pod-network.fac594be40861758e6b24dac15a3376b609ab2e87e63194f42b29974ebd355ec" Workload="ci--4459.0.0--n--404d4275b5-k8s-calico--apiserver--7b46d565d6--2bmc2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d2ff0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459.0.0-n-404d4275b5", "pod":"calico-apiserver-7b46d565d6-2bmc2", "timestamp":"2025-09-16 04:42:25.908024059 +0000 UTC"}, Hostname:"ci-4459.0.0-n-404d4275b5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, 
MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 16 04:42:25.968777 containerd[1865]: 2025-09-16 04:42:25.908 [INFO][5184] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 16 04:42:25.968777 containerd[1865]: 2025-09-16 04:42:25.908 [INFO][5184] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 16 04:42:25.968777 containerd[1865]: 2025-09-16 04:42:25.908 [INFO][5184] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.0.0-n-404d4275b5' Sep 16 04:42:25.968777 containerd[1865]: 2025-09-16 04:42:25.914 [INFO][5184] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fac594be40861758e6b24dac15a3376b609ab2e87e63194f42b29974ebd355ec" host="ci-4459.0.0-n-404d4275b5" Sep 16 04:42:25.968777 containerd[1865]: 2025-09-16 04:42:25.917 [INFO][5184] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.0.0-n-404d4275b5" Sep 16 04:42:25.968777 containerd[1865]: 2025-09-16 04:42:25.921 [INFO][5184] ipam/ipam.go 511: Trying affinity for 192.168.27.192/26 host="ci-4459.0.0-n-404d4275b5" Sep 16 04:42:25.968777 containerd[1865]: 2025-09-16 04:42:25.922 [INFO][5184] ipam/ipam.go 158: Attempting to load block cidr=192.168.27.192/26 host="ci-4459.0.0-n-404d4275b5" Sep 16 04:42:25.968777 containerd[1865]: 2025-09-16 04:42:25.925 [INFO][5184] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.27.192/26 host="ci-4459.0.0-n-404d4275b5" Sep 16 04:42:25.968777 containerd[1865]: 2025-09-16 04:42:25.925 [INFO][5184] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.27.192/26 handle="k8s-pod-network.fac594be40861758e6b24dac15a3376b609ab2e87e63194f42b29974ebd355ec" host="ci-4459.0.0-n-404d4275b5" Sep 16 04:42:25.968777 containerd[1865]: 2025-09-16 04:42:25.926 [INFO][5184] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.fac594be40861758e6b24dac15a3376b609ab2e87e63194f42b29974ebd355ec Sep 16 04:42:25.968777 containerd[1865]: 2025-09-16 04:42:25.931 [INFO][5184] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.27.192/26 handle="k8s-pod-network.fac594be40861758e6b24dac15a3376b609ab2e87e63194f42b29974ebd355ec" host="ci-4459.0.0-n-404d4275b5" Sep 16 04:42:25.968777 containerd[1865]: 2025-09-16 04:42:25.939 [INFO][5184] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.27.198/26] block=192.168.27.192/26 handle="k8s-pod-network.fac594be40861758e6b24dac15a3376b609ab2e87e63194f42b29974ebd355ec" host="ci-4459.0.0-n-404d4275b5" Sep 16 04:42:25.968777 containerd[1865]: 2025-09-16 04:42:25.939 [INFO][5184] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.27.198/26] handle="k8s-pod-network.fac594be40861758e6b24dac15a3376b609ab2e87e63194f42b29974ebd355ec" host="ci-4459.0.0-n-404d4275b5" Sep 16 04:42:25.968777 containerd[1865]: 2025-09-16 04:42:25.939 [INFO][5184] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
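[Annotation] When auditing a node from a journal like this, the "IPAM assigned addresses" entries are the quickest way to map containers to pod IPs. A small extractor matched against the exact phrasing used in this log (other Calico versions may word it differently):

    package main

    import (
        "fmt"
        "regexp"
    )

    // Pulls (IP, ContainerID) pairs out of journal lines like the
    // "Calico CNI IPAM assigned addresses" entries above. Tested only
    // against the format shown in this log.
    var assignRe = regexp.MustCompile(`IPAM assigned addresses IPv4=\[([^\]]+)\].*ContainerID="([0-9a-f]+)"`)

    func main() {
        line := `ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.27.197/26] IPv6=[] ContainerID="19446881c97ad4f0b5c6e993541fa3705360f8f32b1e750341d250df60fa0a1b"`
        if m := assignRe.FindStringSubmatch(line); m != nil {
            fmt.Printf("container %s got %s\n", m[2][:12], m[1])
        }
    }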
Sep 16 04:42:25.968777 containerd[1865]: 2025-09-16 04:42:25.939 [INFO][5184] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.27.198/26] IPv6=[] ContainerID="fac594be40861758e6b24dac15a3376b609ab2e87e63194f42b29974ebd355ec" HandleID="k8s-pod-network.fac594be40861758e6b24dac15a3376b609ab2e87e63194f42b29974ebd355ec" Workload="ci--4459.0.0--n--404d4275b5-k8s-calico--apiserver--7b46d565d6--2bmc2-eth0" Sep 16 04:42:25.969509 containerd[1865]: 2025-09-16 04:42:25.942 [INFO][5158] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fac594be40861758e6b24dac15a3376b609ab2e87e63194f42b29974ebd355ec" Namespace="calico-apiserver" Pod="calico-apiserver-7b46d565d6-2bmc2" WorkloadEndpoint="ci--4459.0.0--n--404d4275b5-k8s-calico--apiserver--7b46d565d6--2bmc2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.0.0--n--404d4275b5-k8s-calico--apiserver--7b46d565d6--2bmc2-eth0", GenerateName:"calico-apiserver-7b46d565d6-", Namespace:"calico-apiserver", SelfLink:"", UID:"6fdb1262-eead-4398-8cae-b3e060954c4e", ResourceVersion:"797", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 41, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b46d565d6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.0.0-n-404d4275b5", ContainerID:"", Pod:"calico-apiserver-7b46d565d6-2bmc2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.27.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif300e3586e8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:42:25.969509 containerd[1865]: 2025-09-16 04:42:25.943 [INFO][5158] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.27.198/32] ContainerID="fac594be40861758e6b24dac15a3376b609ab2e87e63194f42b29974ebd355ec" Namespace="calico-apiserver" Pod="calico-apiserver-7b46d565d6-2bmc2" WorkloadEndpoint="ci--4459.0.0--n--404d4275b5-k8s-calico--apiserver--7b46d565d6--2bmc2-eth0" Sep 16 04:42:25.969509 containerd[1865]: 2025-09-16 04:42:25.943 [INFO][5158] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif300e3586e8 ContainerID="fac594be40861758e6b24dac15a3376b609ab2e87e63194f42b29974ebd355ec" Namespace="calico-apiserver" Pod="calico-apiserver-7b46d565d6-2bmc2" WorkloadEndpoint="ci--4459.0.0--n--404d4275b5-k8s-calico--apiserver--7b46d565d6--2bmc2-eth0" Sep 16 04:42:25.969509 containerd[1865]: 2025-09-16 04:42:25.950 [INFO][5158] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fac594be40861758e6b24dac15a3376b609ab2e87e63194f42b29974ebd355ec" Namespace="calico-apiserver" Pod="calico-apiserver-7b46d565d6-2bmc2" WorkloadEndpoint="ci--4459.0.0--n--404d4275b5-k8s-calico--apiserver--7b46d565d6--2bmc2-eth0" Sep 16 04:42:25.969509 containerd[1865]: 2025-09-16 04:42:25.952 
[INFO][5158] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fac594be40861758e6b24dac15a3376b609ab2e87e63194f42b29974ebd355ec" Namespace="calico-apiserver" Pod="calico-apiserver-7b46d565d6-2bmc2" WorkloadEndpoint="ci--4459.0.0--n--404d4275b5-k8s-calico--apiserver--7b46d565d6--2bmc2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.0.0--n--404d4275b5-k8s-calico--apiserver--7b46d565d6--2bmc2-eth0", GenerateName:"calico-apiserver-7b46d565d6-", Namespace:"calico-apiserver", SelfLink:"", UID:"6fdb1262-eead-4398-8cae-b3e060954c4e", ResourceVersion:"797", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 41, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b46d565d6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.0.0-n-404d4275b5", ContainerID:"fac594be40861758e6b24dac15a3376b609ab2e87e63194f42b29974ebd355ec", Pod:"calico-apiserver-7b46d565d6-2bmc2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.27.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif300e3586e8", MAC:"de:33:c1:59:32:80", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:42:25.969509 containerd[1865]: 2025-09-16 04:42:25.966 [INFO][5158] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fac594be40861758e6b24dac15a3376b609ab2e87e63194f42b29974ebd355ec" Namespace="calico-apiserver" Pod="calico-apiserver-7b46d565d6-2bmc2" WorkloadEndpoint="ci--4459.0.0--n--404d4275b5-k8s-calico--apiserver--7b46d565d6--2bmc2-eth0" Sep 16 04:42:26.049752 systemd-networkd[1693]: cali45c660ab8c2: Link UP Sep 16 04:42:26.050359 systemd-networkd[1693]: cali45c660ab8c2: Gained carrier Sep 16 04:42:26.072335 containerd[1865]: 2025-09-16 04:42:25.941 [INFO][5168] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.0.0--n--404d4275b5-k8s-calico--apiserver--7b46d565d6--8h62f-eth0 calico-apiserver-7b46d565d6- calico-apiserver c870852e-7bb4-4f75-b61b-79ac5929ece7 804 0 2025-09-16 04:41:59 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7b46d565d6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.0.0-n-404d4275b5 calico-apiserver-7b46d565d6-8h62f eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali45c660ab8c2 [] [] }} ContainerID="aaf37971be8de935833d69161b5d6d018b96300108de4defde7787687eaddc23" Namespace="calico-apiserver" Pod="calico-apiserver-7b46d565d6-8h62f" WorkloadEndpoint="ci--4459.0.0--n--404d4275b5-k8s-calico--apiserver--7b46d565d6--8h62f-" Sep 16 04:42:26.072335 
containerd[1865]: 2025-09-16 04:42:25.941 [INFO][5168] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="aaf37971be8de935833d69161b5d6d018b96300108de4defde7787687eaddc23" Namespace="calico-apiserver" Pod="calico-apiserver-7b46d565d6-8h62f" WorkloadEndpoint="ci--4459.0.0--n--404d4275b5-k8s-calico--apiserver--7b46d565d6--8h62f-eth0" Sep 16 04:42:26.072335 containerd[1865]: 2025-09-16 04:42:25.976 [INFO][5193] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="aaf37971be8de935833d69161b5d6d018b96300108de4defde7787687eaddc23" HandleID="k8s-pod-network.aaf37971be8de935833d69161b5d6d018b96300108de4defde7787687eaddc23" Workload="ci--4459.0.0--n--404d4275b5-k8s-calico--apiserver--7b46d565d6--8h62f-eth0" Sep 16 04:42:26.072335 containerd[1865]: 2025-09-16 04:42:25.976 [INFO][5193] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="aaf37971be8de935833d69161b5d6d018b96300108de4defde7787687eaddc23" HandleID="k8s-pod-network.aaf37971be8de935833d69161b5d6d018b96300108de4defde7787687eaddc23" Workload="ci--4459.0.0--n--404d4275b5-k8s-calico--apiserver--7b46d565d6--8h62f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3150), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459.0.0-n-404d4275b5", "pod":"calico-apiserver-7b46d565d6-8h62f", "timestamp":"2025-09-16 04:42:25.97647748 +0000 UTC"}, Hostname:"ci-4459.0.0-n-404d4275b5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 16 04:42:26.072335 containerd[1865]: 2025-09-16 04:42:25.976 [INFO][5193] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 16 04:42:26.072335 containerd[1865]: 2025-09-16 04:42:25.976 [INFO][5193] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 16 04:42:26.072335 containerd[1865]: 2025-09-16 04:42:25.976 [INFO][5193] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.0.0-n-404d4275b5' Sep 16 04:42:26.072335 containerd[1865]: 2025-09-16 04:42:26.015 [INFO][5193] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.aaf37971be8de935833d69161b5d6d018b96300108de4defde7787687eaddc23" host="ci-4459.0.0-n-404d4275b5" Sep 16 04:42:26.072335 containerd[1865]: 2025-09-16 04:42:26.019 [INFO][5193] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.0.0-n-404d4275b5" Sep 16 04:42:26.072335 containerd[1865]: 2025-09-16 04:42:26.023 [INFO][5193] ipam/ipam.go 511: Trying affinity for 192.168.27.192/26 host="ci-4459.0.0-n-404d4275b5" Sep 16 04:42:26.072335 containerd[1865]: 2025-09-16 04:42:26.025 [INFO][5193] ipam/ipam.go 158: Attempting to load block cidr=192.168.27.192/26 host="ci-4459.0.0-n-404d4275b5" Sep 16 04:42:26.072335 containerd[1865]: 2025-09-16 04:42:26.027 [INFO][5193] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.27.192/26 host="ci-4459.0.0-n-404d4275b5" Sep 16 04:42:26.072335 containerd[1865]: 2025-09-16 04:42:26.027 [INFO][5193] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.27.192/26 handle="k8s-pod-network.aaf37971be8de935833d69161b5d6d018b96300108de4defde7787687eaddc23" host="ci-4459.0.0-n-404d4275b5" Sep 16 04:42:26.072335 containerd[1865]: 2025-09-16 04:42:26.028 [INFO][5193] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.aaf37971be8de935833d69161b5d6d018b96300108de4defde7787687eaddc23 Sep 16 04:42:26.072335 containerd[1865]: 2025-09-16 04:42:26.035 [INFO][5193] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.27.192/26 handle="k8s-pod-network.aaf37971be8de935833d69161b5d6d018b96300108de4defde7787687eaddc23" host="ci-4459.0.0-n-404d4275b5" Sep 16 04:42:26.072335 containerd[1865]: 2025-09-16 04:42:26.044 [INFO][5193] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.27.199/26] block=192.168.27.192/26 handle="k8s-pod-network.aaf37971be8de935833d69161b5d6d018b96300108de4defde7787687eaddc23" host="ci-4459.0.0-n-404d4275b5" Sep 16 04:42:26.072335 containerd[1865]: 2025-09-16 04:42:26.044 [INFO][5193] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.27.199/26] handle="k8s-pod-network.aaf37971be8de935833d69161b5d6d018b96300108de4defde7787687eaddc23" host="ci-4459.0.0-n-404d4275b5" Sep 16 04:42:26.072335 containerd[1865]: 2025-09-16 04:42:26.044 [INFO][5193] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
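[Annotation] Every pod in this log draws from the same affine block, 192.168.27.192/26. Worked out, that block holds 2^(32-26) = 64 addresses, so the node can serve on the order of 64 pod IPs before IPAM must claim a second block:

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        p := netip.MustParsePrefix("192.168.27.192/26")
        n := 1 << (32 - p.Bits()) // 2^6 = 64 addresses in the affine block
        last := p.Addr()
        for i := 1; i < n; i++ {
            last = last.Next()
        }
        fmt.Printf("%s holds %d addresses: %s - %s\n", p, n, p.Addr(), last)
    }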
Sep 16 04:42:26.072335 containerd[1865]: 2025-09-16 04:42:26.044 [INFO][5193] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.27.199/26] IPv6=[] ContainerID="aaf37971be8de935833d69161b5d6d018b96300108de4defde7787687eaddc23" HandleID="k8s-pod-network.aaf37971be8de935833d69161b5d6d018b96300108de4defde7787687eaddc23" Workload="ci--4459.0.0--n--404d4275b5-k8s-calico--apiserver--7b46d565d6--8h62f-eth0" Sep 16 04:42:26.073413 containerd[1865]: 2025-09-16 04:42:26.047 [INFO][5168] cni-plugin/k8s.go 418: Populated endpoint ContainerID="aaf37971be8de935833d69161b5d6d018b96300108de4defde7787687eaddc23" Namespace="calico-apiserver" Pod="calico-apiserver-7b46d565d6-8h62f" WorkloadEndpoint="ci--4459.0.0--n--404d4275b5-k8s-calico--apiserver--7b46d565d6--8h62f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.0.0--n--404d4275b5-k8s-calico--apiserver--7b46d565d6--8h62f-eth0", GenerateName:"calico-apiserver-7b46d565d6-", Namespace:"calico-apiserver", SelfLink:"", UID:"c870852e-7bb4-4f75-b61b-79ac5929ece7", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 41, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b46d565d6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.0.0-n-404d4275b5", ContainerID:"", Pod:"calico-apiserver-7b46d565d6-8h62f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.27.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali45c660ab8c2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:42:26.073413 containerd[1865]: 2025-09-16 04:42:26.047 [INFO][5168] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.27.199/32] ContainerID="aaf37971be8de935833d69161b5d6d018b96300108de4defde7787687eaddc23" Namespace="calico-apiserver" Pod="calico-apiserver-7b46d565d6-8h62f" WorkloadEndpoint="ci--4459.0.0--n--404d4275b5-k8s-calico--apiserver--7b46d565d6--8h62f-eth0" Sep 16 04:42:26.073413 containerd[1865]: 2025-09-16 04:42:26.047 [INFO][5168] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali45c660ab8c2 ContainerID="aaf37971be8de935833d69161b5d6d018b96300108de4defde7787687eaddc23" Namespace="calico-apiserver" Pod="calico-apiserver-7b46d565d6-8h62f" WorkloadEndpoint="ci--4459.0.0--n--404d4275b5-k8s-calico--apiserver--7b46d565d6--8h62f-eth0" Sep 16 04:42:26.073413 containerd[1865]: 2025-09-16 04:42:26.051 [INFO][5168] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="aaf37971be8de935833d69161b5d6d018b96300108de4defde7787687eaddc23" Namespace="calico-apiserver" Pod="calico-apiserver-7b46d565d6-8h62f" WorkloadEndpoint="ci--4459.0.0--n--404d4275b5-k8s-calico--apiserver--7b46d565d6--8h62f-eth0" Sep 16 04:42:26.073413 containerd[1865]: 2025-09-16 04:42:26.051 
[INFO][5168] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="aaf37971be8de935833d69161b5d6d018b96300108de4defde7787687eaddc23" Namespace="calico-apiserver" Pod="calico-apiserver-7b46d565d6-8h62f" WorkloadEndpoint="ci--4459.0.0--n--404d4275b5-k8s-calico--apiserver--7b46d565d6--8h62f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.0.0--n--404d4275b5-k8s-calico--apiserver--7b46d565d6--8h62f-eth0", GenerateName:"calico-apiserver-7b46d565d6-", Namespace:"calico-apiserver", SelfLink:"", UID:"c870852e-7bb4-4f75-b61b-79ac5929ece7", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 41, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b46d565d6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.0.0-n-404d4275b5", ContainerID:"aaf37971be8de935833d69161b5d6d018b96300108de4defde7787687eaddc23", Pod:"calico-apiserver-7b46d565d6-8h62f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.27.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali45c660ab8c2", MAC:"da:e1:e6:a6:8a:db", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:42:26.073413 containerd[1865]: 2025-09-16 04:42:26.065 [INFO][5168] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="aaf37971be8de935833d69161b5d6d018b96300108de4defde7787687eaddc23" Namespace="calico-apiserver" Pod="calico-apiserver-7b46d565d6-8h62f" WorkloadEndpoint="ci--4459.0.0--n--404d4275b5-k8s-calico--apiserver--7b46d565d6--8h62f-eth0" Sep 16 04:42:26.082861 containerd[1865]: time="2025-09-16T04:42:26.082708749Z" level=info msg="connecting to shim fac594be40861758e6b24dac15a3376b609ab2e87e63194f42b29974ebd355ec" address="unix:///run/containerd/s/11462bf9c21f6b4c25e39d03ff3a41a2318c72be480b4d84a8cea2dbbefbe9c9" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:42:26.106768 systemd[1]: Started cri-containerd-fac594be40861758e6b24dac15a3376b609ab2e87e63194f42b29974ebd355ec.scope - libcontainer container fac594be40861758e6b24dac15a3376b609ab2e87e63194f42b29974ebd355ec. Sep 16 04:42:26.128959 containerd[1865]: time="2025-09-16T04:42:26.128706298Z" level=info msg="connecting to shim aaf37971be8de935833d69161b5d6d018b96300108de4defde7787687eaddc23" address="unix:///run/containerd/s/1725cf0d2fc1a964b24e65fab1dbaeeff27799eb8abfbde61c4230fabfff7c69" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:42:26.149864 systemd[1]: Started cri-containerd-aaf37971be8de935833d69161b5d6d018b96300108de4defde7787687eaddc23.scope - libcontainer container aaf37971be8de935833d69161b5d6d018b96300108de4defde7787687eaddc23. 
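[Annotation] Each "connecting to shim ... protocol=ttrpc version=3" entry is containerd dialing a per-sandbox unix socket and speaking ttrpc over it. A sketch of that handshake using the containerd/ttrpc package — the socket path is copied from the log, and actually invoking the task service would additionally need the shim's protobuf definitions, omitted here:

    package main

    import (
        "fmt"
        "net"
        "time"

        "github.com/containerd/ttrpc"
    )

    func main() {
        // Socket path copied from the "connecting to shim" entry above.
        path := "/run/containerd/s/1725cf0d2fc1a964b24e65fab1dbaeeff27799eb8abfbde61c4230fabfff7c69"
        conn, err := net.DialTimeout("unix", path, 2*time.Second)
        if err != nil {
            fmt.Println("dial:", err)
            return
        }
        client := ttrpc.NewClient(conn)
        defer client.Close()
        fmt.Println("ttrpc client ready; task-service calls need the shim protobufs")
    }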
Sep 16 04:42:26.158865 containerd[1865]: time="2025-09-16T04:42:26.158833590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b46d565d6-2bmc2,Uid:6fdb1262-eead-4398-8cae-b3e060954c4e,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"fac594be40861758e6b24dac15a3376b609ab2e87e63194f42b29974ebd355ec\"" Sep 16 04:42:26.203527 containerd[1865]: time="2025-09-16T04:42:26.203493060Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b46d565d6-8h62f,Uid:c870852e-7bb4-4f75-b61b-79ac5929ece7,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"aaf37971be8de935833d69161b5d6d018b96300108de4defde7787687eaddc23\"" Sep 16 04:42:26.224715 systemd-networkd[1693]: cali07ebd3da009: Gained IPv6LL Sep 16 04:42:26.288716 systemd-networkd[1693]: calib1673ecacd7: Gained IPv6LL Sep 16 04:42:26.443957 containerd[1865]: time="2025-09-16T04:42:26.443594392Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:42:26.469723 containerd[1865]: time="2025-09-16T04:42:26.469659226Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8227489" Sep 16 04:42:26.474377 containerd[1865]: time="2025-09-16T04:42:26.474328981Z" level=info msg="ImageCreate event name:\"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:42:26.480324 containerd[1865]: time="2025-09-16T04:42:26.480268391Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 04:42:26.480835 containerd[1865]: time="2025-09-16T04:42:26.480579560Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"9596730\" in 1.361520897s" Sep 16 04:42:26.480835 containerd[1865]: time="2025-09-16T04:42:26.480602673Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\"" Sep 16 04:42:26.482150 containerd[1865]: time="2025-09-16T04:42:26.481420169Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 16 04:42:26.483682 containerd[1865]: time="2025-09-16T04:42:26.483655036Z" level=info msg="CreateContainer within sandbox \"37a481d6c7fbdd02af5755e828815d90a2d49bd2a5d5669e92601c6da8b8b5d2\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 16 04:42:26.506497 containerd[1865]: time="2025-09-16T04:42:26.506465253Z" level=info msg="Container 1e0f626dfdc2361f0a44ec5fa6d2868d149fe8f56a0f1655bf2ff0b1d0a0b461: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:42:26.527244 containerd[1865]: time="2025-09-16T04:42:26.527206489Z" level=info msg="CreateContainer within sandbox \"37a481d6c7fbdd02af5755e828815d90a2d49bd2a5d5669e92601c6da8b8b5d2\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"1e0f626dfdc2361f0a44ec5fa6d2868d149fe8f56a0f1655bf2ff0b1d0a0b461\"" Sep 16 04:42:26.527695 containerd[1865]: time="2025-09-16T04:42:26.527670951Z" level=info msg="StartContainer for 
\"1e0f626dfdc2361f0a44ec5fa6d2868d149fe8f56a0f1655bf2ff0b1d0a0b461\"" Sep 16 04:42:26.528951 containerd[1865]: time="2025-09-16T04:42:26.528917084Z" level=info msg="connecting to shim 1e0f626dfdc2361f0a44ec5fa6d2868d149fe8f56a0f1655bf2ff0b1d0a0b461" address="unix:///run/containerd/s/9b0acbd702e74908cbb770d5181ad14ea43ef85703c74eacb0fc610e1f4acb93" protocol=ttrpc version=3 Sep 16 04:42:26.547031 systemd[1]: Started cri-containerd-1e0f626dfdc2361f0a44ec5fa6d2868d149fe8f56a0f1655bf2ff0b1d0a0b461.scope - libcontainer container 1e0f626dfdc2361f0a44ec5fa6d2868d149fe8f56a0f1655bf2ff0b1d0a0b461. Sep 16 04:42:26.578937 containerd[1865]: time="2025-09-16T04:42:26.578898689Z" level=info msg="StartContainer for \"1e0f626dfdc2361f0a44ec5fa6d2868d149fe8f56a0f1655bf2ff0b1d0a0b461\" returns successfully" Sep 16 04:42:26.825099 containerd[1865]: time="2025-09-16T04:42:26.825024304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bv4px,Uid:e8ebb8cf-d133-46d0-bb77-906589d0067d,Namespace:kube-system,Attempt:0,}" Sep 16 04:42:26.825873 containerd[1865]: time="2025-09-16T04:42:26.825818112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b6466f845-jd2sh,Uid:50f30276-5c1b-4f47-90a5-73b141a40f1c,Namespace:calico-system,Attempt:0,}" Sep 16 04:42:26.865161 systemd-networkd[1693]: calie970514e39e: Gained IPv6LL Sep 16 04:42:26.951011 systemd-networkd[1693]: cali4293e655505: Link UP Sep 16 04:42:26.951650 systemd-networkd[1693]: cali4293e655505: Gained carrier Sep 16 04:42:26.974917 containerd[1865]: 2025-09-16 04:42:26.880 [INFO][5343] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.0.0--n--404d4275b5-k8s-coredns--668d6bf9bc--bv4px-eth0 coredns-668d6bf9bc- kube-system e8ebb8cf-d133-46d0-bb77-906589d0067d 800 0 2025-09-16 04:41:50 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459.0.0-n-404d4275b5 coredns-668d6bf9bc-bv4px eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4293e655505 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="355edccd5ca32d2da6900a67ee7e0ecba367d5c8d797f816aa1496bd5f78d5d6" Namespace="kube-system" Pod="coredns-668d6bf9bc-bv4px" WorkloadEndpoint="ci--4459.0.0--n--404d4275b5-k8s-coredns--668d6bf9bc--bv4px-" Sep 16 04:42:26.974917 containerd[1865]: 2025-09-16 04:42:26.880 [INFO][5343] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="355edccd5ca32d2da6900a67ee7e0ecba367d5c8d797f816aa1496bd5f78d5d6" Namespace="kube-system" Pod="coredns-668d6bf9bc-bv4px" WorkloadEndpoint="ci--4459.0.0--n--404d4275b5-k8s-coredns--668d6bf9bc--bv4px-eth0" Sep 16 04:42:26.974917 containerd[1865]: 2025-09-16 04:42:26.906 [INFO][5367] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="355edccd5ca32d2da6900a67ee7e0ecba367d5c8d797f816aa1496bd5f78d5d6" HandleID="k8s-pod-network.355edccd5ca32d2da6900a67ee7e0ecba367d5c8d797f816aa1496bd5f78d5d6" Workload="ci--4459.0.0--n--404d4275b5-k8s-coredns--668d6bf9bc--bv4px-eth0" Sep 16 04:42:26.974917 containerd[1865]: 2025-09-16 04:42:26.906 [INFO][5367] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="355edccd5ca32d2da6900a67ee7e0ecba367d5c8d797f816aa1496bd5f78d5d6" HandleID="k8s-pod-network.355edccd5ca32d2da6900a67ee7e0ecba367d5c8d797f816aa1496bd5f78d5d6" 
Workload="ci--4459.0.0--n--404d4275b5-k8s-coredns--668d6bf9bc--bv4px-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3950), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459.0.0-n-404d4275b5", "pod":"coredns-668d6bf9bc-bv4px", "timestamp":"2025-09-16 04:42:26.906398423 +0000 UTC"}, Hostname:"ci-4459.0.0-n-404d4275b5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 16 04:42:26.974917 containerd[1865]: 2025-09-16 04:42:26.906 [INFO][5367] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 16 04:42:26.974917 containerd[1865]: 2025-09-16 04:42:26.906 [INFO][5367] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 16 04:42:26.974917 containerd[1865]: 2025-09-16 04:42:26.906 [INFO][5367] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.0.0-n-404d4275b5' Sep 16 04:42:26.974917 containerd[1865]: 2025-09-16 04:42:26.913 [INFO][5367] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.355edccd5ca32d2da6900a67ee7e0ecba367d5c8d797f816aa1496bd5f78d5d6" host="ci-4459.0.0-n-404d4275b5" Sep 16 04:42:26.974917 containerd[1865]: 2025-09-16 04:42:26.917 [INFO][5367] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.0.0-n-404d4275b5" Sep 16 04:42:26.974917 containerd[1865]: 2025-09-16 04:42:26.921 [INFO][5367] ipam/ipam.go 511: Trying affinity for 192.168.27.192/26 host="ci-4459.0.0-n-404d4275b5" Sep 16 04:42:26.974917 containerd[1865]: 2025-09-16 04:42:26.922 [INFO][5367] ipam/ipam.go 158: Attempting to load block cidr=192.168.27.192/26 host="ci-4459.0.0-n-404d4275b5" Sep 16 04:42:26.974917 containerd[1865]: 2025-09-16 04:42:26.924 [INFO][5367] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.27.192/26 host="ci-4459.0.0-n-404d4275b5" Sep 16 04:42:26.974917 containerd[1865]: 2025-09-16 04:42:26.924 [INFO][5367] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.27.192/26 handle="k8s-pod-network.355edccd5ca32d2da6900a67ee7e0ecba367d5c8d797f816aa1496bd5f78d5d6" host="ci-4459.0.0-n-404d4275b5" Sep 16 04:42:26.974917 containerd[1865]: 2025-09-16 04:42:26.925 [INFO][5367] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.355edccd5ca32d2da6900a67ee7e0ecba367d5c8d797f816aa1496bd5f78d5d6 Sep 16 04:42:26.974917 containerd[1865]: 2025-09-16 04:42:26.930 [INFO][5367] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.27.192/26 handle="k8s-pod-network.355edccd5ca32d2da6900a67ee7e0ecba367d5c8d797f816aa1496bd5f78d5d6" host="ci-4459.0.0-n-404d4275b5" Sep 16 04:42:26.974917 containerd[1865]: 2025-09-16 04:42:26.943 [INFO][5367] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.27.200/26] block=192.168.27.192/26 handle="k8s-pod-network.355edccd5ca32d2da6900a67ee7e0ecba367d5c8d797f816aa1496bd5f78d5d6" host="ci-4459.0.0-n-404d4275b5" Sep 16 04:42:26.974917 containerd[1865]: 2025-09-16 04:42:26.943 [INFO][5367] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.27.200/26] handle="k8s-pod-network.355edccd5ca32d2da6900a67ee7e0ecba367d5c8d797f816aa1496bd5f78d5d6" host="ci-4459.0.0-n-404d4275b5" Sep 16 04:42:26.974917 containerd[1865]: 2025-09-16 04:42:26.945 [INFO][5367] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 16 04:42:26.974917 containerd[1865]: 2025-09-16 04:42:26.945 [INFO][5367] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.27.200/26] IPv6=[] ContainerID="355edccd5ca32d2da6900a67ee7e0ecba367d5c8d797f816aa1496bd5f78d5d6" HandleID="k8s-pod-network.355edccd5ca32d2da6900a67ee7e0ecba367d5c8d797f816aa1496bd5f78d5d6" Workload="ci--4459.0.0--n--404d4275b5-k8s-coredns--668d6bf9bc--bv4px-eth0" Sep 16 04:42:26.976926 containerd[1865]: 2025-09-16 04:42:26.947 [INFO][5343] cni-plugin/k8s.go 418: Populated endpoint ContainerID="355edccd5ca32d2da6900a67ee7e0ecba367d5c8d797f816aa1496bd5f78d5d6" Namespace="kube-system" Pod="coredns-668d6bf9bc-bv4px" WorkloadEndpoint="ci--4459.0.0--n--404d4275b5-k8s-coredns--668d6bf9bc--bv4px-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.0.0--n--404d4275b5-k8s-coredns--668d6bf9bc--bv4px-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"e8ebb8cf-d133-46d0-bb77-906589d0067d", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 41, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.0.0-n-404d4275b5", ContainerID:"", Pod:"coredns-668d6bf9bc-bv4px", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.27.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4293e655505", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:42:26.976926 containerd[1865]: 2025-09-16 04:42:26.947 [INFO][5343] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.27.200/32] ContainerID="355edccd5ca32d2da6900a67ee7e0ecba367d5c8d797f816aa1496bd5f78d5d6" Namespace="kube-system" Pod="coredns-668d6bf9bc-bv4px" WorkloadEndpoint="ci--4459.0.0--n--404d4275b5-k8s-coredns--668d6bf9bc--bv4px-eth0" Sep 16 04:42:26.976926 containerd[1865]: 2025-09-16 04:42:26.947 [INFO][5343] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4293e655505 ContainerID="355edccd5ca32d2da6900a67ee7e0ecba367d5c8d797f816aa1496bd5f78d5d6" Namespace="kube-system" Pod="coredns-668d6bf9bc-bv4px" WorkloadEndpoint="ci--4459.0.0--n--404d4275b5-k8s-coredns--668d6bf9bc--bv4px-eth0" Sep 16 04:42:26.976926 containerd[1865]: 2025-09-16 04:42:26.952 [INFO][5343] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="355edccd5ca32d2da6900a67ee7e0ecba367d5c8d797f816aa1496bd5f78d5d6" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-bv4px" WorkloadEndpoint="ci--4459.0.0--n--404d4275b5-k8s-coredns--668d6bf9bc--bv4px-eth0" Sep 16 04:42:26.976926 containerd[1865]: 2025-09-16 04:42:26.955 [INFO][5343] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="355edccd5ca32d2da6900a67ee7e0ecba367d5c8d797f816aa1496bd5f78d5d6" Namespace="kube-system" Pod="coredns-668d6bf9bc-bv4px" WorkloadEndpoint="ci--4459.0.0--n--404d4275b5-k8s-coredns--668d6bf9bc--bv4px-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.0.0--n--404d4275b5-k8s-coredns--668d6bf9bc--bv4px-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"e8ebb8cf-d133-46d0-bb77-906589d0067d", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 41, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.0.0-n-404d4275b5", ContainerID:"355edccd5ca32d2da6900a67ee7e0ecba367d5c8d797f816aa1496bd5f78d5d6", Pod:"coredns-668d6bf9bc-bv4px", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.27.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4293e655505", MAC:"e6:dd:dd:27:6d:4f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 16 04:42:26.976926 containerd[1865]: 2025-09-16 04:42:26.970 [INFO][5343] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="355edccd5ca32d2da6900a67ee7e0ecba367d5c8d797f816aa1496bd5f78d5d6" Namespace="kube-system" Pod="coredns-668d6bf9bc-bv4px" WorkloadEndpoint="ci--4459.0.0--n--404d4275b5-k8s-coredns--668d6bf9bc--bv4px-eth0" Sep 16 04:42:27.248792 systemd-networkd[1693]: cali45c660ab8c2: Gained IPv6LL Sep 16 04:42:27.334458 systemd-networkd[1693]: calie445abcbf54: Link UP Sep 16 04:42:27.335172 systemd-networkd[1693]: calie445abcbf54: Gained carrier Sep 16 04:42:27.353627 containerd[1865]: 2025-09-16 04:42:26.891 [INFO][5356] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.0.0--n--404d4275b5-k8s-calico--kube--controllers--6b6466f845--jd2sh-eth0 calico-kube-controllers-6b6466f845- calico-system 50f30276-5c1b-4f47-90a5-73b141a40f1c 803 0 2025-09-16 04:42:02 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6b6466f845 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4459.0.0-n-404d4275b5 calico-kube-controllers-6b6466f845-jd2sh eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calie445abcbf54 [] [] }} ContainerID="281c130e1bd1134290218a6212b2164613b4769255157d6a38d41c0f86331934" Namespace="calico-system" Pod="calico-kube-controllers-6b6466f845-jd2sh" WorkloadEndpoint="ci--4459.0.0--n--404d4275b5-k8s-calico--kube--controllers--6b6466f845--jd2sh-" Sep 16 04:42:27.353627 containerd[1865]: 2025-09-16 04:42:26.891 [INFO][5356] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="281c130e1bd1134290218a6212b2164613b4769255157d6a38d41c0f86331934" Namespace="calico-system" Pod="calico-kube-controllers-6b6466f845-jd2sh" WorkloadEndpoint="ci--4459.0.0--n--404d4275b5-k8s-calico--kube--controllers--6b6466f845--jd2sh-eth0" Sep 16 04:42:27.353627 containerd[1865]: 2025-09-16 04:42:26.974 [INFO][5374] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="281c130e1bd1134290218a6212b2164613b4769255157d6a38d41c0f86331934" HandleID="k8s-pod-network.281c130e1bd1134290218a6212b2164613b4769255157d6a38d41c0f86331934" Workload="ci--4459.0.0--n--404d4275b5-k8s-calico--kube--controllers--6b6466f845--jd2sh-eth0" Sep 16 04:42:27.353627 containerd[1865]: 2025-09-16 04:42:26.974 [INFO][5374] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="281c130e1bd1134290218a6212b2164613b4769255157d6a38d41c0f86331934" HandleID="k8s-pod-network.281c130e1bd1134290218a6212b2164613b4769255157d6a38d41c0f86331934" Workload="ci--4459.0.0--n--404d4275b5-k8s-calico--kube--controllers--6b6466f845--jd2sh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024aff0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.0.0-n-404d4275b5", "pod":"calico-kube-controllers-6b6466f845-jd2sh", "timestamp":"2025-09-16 04:42:26.974142734 +0000 UTC"}, Hostname:"ci-4459.0.0-n-404d4275b5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 16 04:42:27.353627 containerd[1865]: 2025-09-16 04:42:26.974 [INFO][5374] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 16 04:42:27.353627 containerd[1865]: 2025-09-16 04:42:26.974 [INFO][5374] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
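[Annotation] The coredns WorkloadEndpoint above logs its ports in hex: Port:0x35 twice (DNS over UDP and TCP) and Port:0x23c1 (the Prometheus metrics endpoint). Decoded:

    package main

    import "fmt"

    func main() {
        // Hex port values copied from the coredns endpoint dump above.
        ports := map[string]uint16{"dns": 0x35, "dns-tcp": 0x35, "metrics": 0x23c1}
        for name, p := range ports {
            fmt.Printf("%-8s %d\n", name, p)
        }
        // 0x35 = 53 (DNS), 0x23c1 = 9153 (coredns metrics)
    }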
Sep 16 04:42:27.353627 containerd[1865]: 2025-09-16 04:42:26.974 [INFO][5374] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.0.0-n-404d4275b5' Sep 16 04:42:27.353627 containerd[1865]: 2025-09-16 04:42:27.014 [INFO][5374] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.281c130e1bd1134290218a6212b2164613b4769255157d6a38d41c0f86331934" host="ci-4459.0.0-n-404d4275b5" Sep 16 04:42:27.353627 containerd[1865]: 2025-09-16 04:42:27.018 [INFO][5374] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.0.0-n-404d4275b5" Sep 16 04:42:27.353627 containerd[1865]: 2025-09-16 04:42:27.022 [INFO][5374] ipam/ipam.go 511: Trying affinity for 192.168.27.192/26 host="ci-4459.0.0-n-404d4275b5" Sep 16 04:42:27.353627 containerd[1865]: 2025-09-16 04:42:27.023 [INFO][5374] ipam/ipam.go 158: Attempting to load block cidr=192.168.27.192/26 host="ci-4459.0.0-n-404d4275b5" Sep 16 04:42:27.353627 containerd[1865]: 2025-09-16 04:42:27.033 [INFO][5374] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.27.192/26 host="ci-4459.0.0-n-404d4275b5" Sep 16 04:42:27.353627 containerd[1865]: 2025-09-16 04:42:27.138 [INFO][5374] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.27.192/26 handle="k8s-pod-network.281c130e1bd1134290218a6212b2164613b4769255157d6a38d41c0f86331934" host="ci-4459.0.0-n-404d4275b5" Sep 16 04:42:27.353627 containerd[1865]: 2025-09-16 04:42:27.142 [INFO][5374] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.281c130e1bd1134290218a6212b2164613b4769255157d6a38d41c0f86331934 Sep 16 04:42:27.353627 containerd[1865]: 2025-09-16 04:42:27.151 [INFO][5374] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.27.192/26 handle="k8s-pod-network.281c130e1bd1134290218a6212b2164613b4769255157d6a38d41c0f86331934" host="ci-4459.0.0-n-404d4275b5" Sep 16 04:42:27.353627 containerd[1865]: 2025-09-16 04:42:27.212 [INFO][5374] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.27.201/26] block=192.168.27.192/26 handle="k8s-pod-network.281c130e1bd1134290218a6212b2164613b4769255157d6a38d41c0f86331934" host="ci-4459.0.0-n-404d4275b5" Sep 16 04:42:27.353627 containerd[1865]: 2025-09-16 04:42:27.328 [INFO][5374] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.27.201/26] handle="k8s-pod-network.281c130e1bd1134290218a6212b2164613b4769255157d6a38d41c0f86331934" host="ci-4459.0.0-n-404d4275b5" Sep 16 04:42:27.353627 containerd[1865]: 2025-09-16 04:42:27.328 [INFO][5374] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
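[Annotation] The "Gained IPv6LL" entries mean each cali* interface acquired an IPv6 link-local address. The classic RFC 4291 EUI-64 derivation from a MAC is sketched below, using an endpoint MAC from this log; note that systemd-networkd may instead use stable-privacy tokens depending on configuration, in which case the real address won't match this form:

    package main

    import (
        "fmt"
        "net"
    )

    // eui64LinkLocal derives the classic RFC 4291 link-local address from a
    // 48-bit MAC: flip the universal/local bit of the first octet and splice
    // ff:fe into the middle of the address.
    func eui64LinkLocal(mac net.HardwareAddr) net.IP {
        ip := make(net.IP, 16)
        ip[0], ip[1] = 0xfe, 0x80
        ip[8] = mac[0] ^ 0x02
        ip[9], ip[10] = mac[1], mac[2]
        ip[11], ip[12] = 0xff, 0xfe
        ip[13], ip[14], ip[15] = mac[3], mac[4], mac[5]
        return ip
    }

    func main() {
        // Endpoint MAC logged for the cali45c660ab8c2 workload above.
        mac, _ := net.ParseMAC("da:e1:e6:a6:8a:db")
        fmt.Println(eui64LinkLocal(mac)) // fe80::d8e1:e6ff:fea6:8adb
    }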
Sep 16 04:42:27.353627 containerd[1865]: 2025-09-16 04:42:27.328 [INFO][5374] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.27.201/26] IPv6=[] ContainerID="281c130e1bd1134290218a6212b2164613b4769255157d6a38d41c0f86331934" HandleID="k8s-pod-network.281c130e1bd1134290218a6212b2164613b4769255157d6a38d41c0f86331934" Workload="ci--4459.0.0--n--404d4275b5-k8s-calico--kube--controllers--6b6466f845--jd2sh-eth0"
Sep 16 04:42:27.354061 containerd[1865]: 2025-09-16 04:42:27.331 [INFO][5356] cni-plugin/k8s.go 418: Populated endpoint ContainerID="281c130e1bd1134290218a6212b2164613b4769255157d6a38d41c0f86331934" Namespace="calico-system" Pod="calico-kube-controllers-6b6466f845-jd2sh" WorkloadEndpoint="ci--4459.0.0--n--404d4275b5-k8s-calico--kube--controllers--6b6466f845--jd2sh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.0.0--n--404d4275b5-k8s-calico--kube--controllers--6b6466f845--jd2sh-eth0", GenerateName:"calico-kube-controllers-6b6466f845-", Namespace:"calico-system", SelfLink:"", UID:"50f30276-5c1b-4f47-90a5-73b141a40f1c", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 42, 2, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6b6466f845", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.0.0-n-404d4275b5", ContainerID:"", Pod:"calico-kube-controllers-6b6466f845-jd2sh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.27.201/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie445abcbf54", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 16 04:42:27.354061 containerd[1865]: 2025-09-16 04:42:27.331 [INFO][5356] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.27.201/32] ContainerID="281c130e1bd1134290218a6212b2164613b4769255157d6a38d41c0f86331934" Namespace="calico-system" Pod="calico-kube-controllers-6b6466f845-jd2sh" WorkloadEndpoint="ci--4459.0.0--n--404d4275b5-k8s-calico--kube--controllers--6b6466f845--jd2sh-eth0"
Sep 16 04:42:27.354061 containerd[1865]: 2025-09-16 04:42:27.331 [INFO][5356] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie445abcbf54 ContainerID="281c130e1bd1134290218a6212b2164613b4769255157d6a38d41c0f86331934" Namespace="calico-system" Pod="calico-kube-controllers-6b6466f845-jd2sh" WorkloadEndpoint="ci--4459.0.0--n--404d4275b5-k8s-calico--kube--controllers--6b6466f845--jd2sh-eth0"
Sep 16 04:42:27.354061 containerd[1865]: 2025-09-16 04:42:27.335 [INFO][5356] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="281c130e1bd1134290218a6212b2164613b4769255157d6a38d41c0f86331934" Namespace="calico-system" Pod="calico-kube-controllers-6b6466f845-jd2sh" WorkloadEndpoint="ci--4459.0.0--n--404d4275b5-k8s-calico--kube--controllers--6b6466f845--jd2sh-eth0"
Sep 16 04:42:27.354061 containerd[1865]: 2025-09-16 04:42:27.335 [INFO][5356] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="281c130e1bd1134290218a6212b2164613b4769255157d6a38d41c0f86331934" Namespace="calico-system" Pod="calico-kube-controllers-6b6466f845-jd2sh" WorkloadEndpoint="ci--4459.0.0--n--404d4275b5-k8s-calico--kube--controllers--6b6466f845--jd2sh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.0.0--n--404d4275b5-k8s-calico--kube--controllers--6b6466f845--jd2sh-eth0", GenerateName:"calico-kube-controllers-6b6466f845-", Namespace:"calico-system", SelfLink:"", UID:"50f30276-5c1b-4f47-90a5-73b141a40f1c", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 42, 2, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6b6466f845", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.0.0-n-404d4275b5", ContainerID:"281c130e1bd1134290218a6212b2164613b4769255157d6a38d41c0f86331934", Pod:"calico-kube-controllers-6b6466f845-jd2sh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.27.201/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie445abcbf54", MAC:"3a:7f:83:70:cb:c5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 16 04:42:27.354061 containerd[1865]: 2025-09-16 04:42:27.347 [INFO][5356] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="281c130e1bd1134290218a6212b2164613b4769255157d6a38d41c0f86331934" Namespace="calico-system" Pod="calico-kube-controllers-6b6466f845-jd2sh" WorkloadEndpoint="ci--4459.0.0--n--404d4275b5-k8s-calico--kube--controllers--6b6466f845--jd2sh-eth0"
Sep 16 04:42:27.412526 containerd[1865]: time="2025-09-16T04:42:27.412483466Z" level=info msg="connecting to shim 355edccd5ca32d2da6900a67ee7e0ecba367d5c8d797f816aa1496bd5f78d5d6" address="unix:///run/containerd/s/36f819d8c5098ca88e9f978c76215d88b5cec0c33bb9f72abce9ef35c465448b" namespace=k8s.io protocol=ttrpc version=3
Sep 16 04:42:27.440899 containerd[1865]: time="2025-09-16T04:42:27.440825721Z" level=info msg="connecting to shim 281c130e1bd1134290218a6212b2164613b4769255157d6a38d41c0f86331934" address="unix:///run/containerd/s/73a2c1b798cdda8c75de6024cff132a6987084dc969a5c014f777288e47b378f" namespace=k8s.io protocol=ttrpc version=3
Sep 16 04:42:27.443798 systemd[1]: Started cri-containerd-355edccd5ca32d2da6900a67ee7e0ecba367d5c8d797f816aa1496bd5f78d5d6.scope - libcontainer container 355edccd5ca32d2da6900a67ee7e0ecba367d5c8d797f816aa1496bd5f78d5d6.
Sep 16 04:42:27.459735 systemd[1]: Started cri-containerd-281c130e1bd1134290218a6212b2164613b4769255157d6a38d41c0f86331934.scope - libcontainer container 281c130e1bd1134290218a6212b2164613b4769255157d6a38d41c0f86331934.
Sep 16 04:42:27.493981 containerd[1865]: time="2025-09-16T04:42:27.493949595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bv4px,Uid:e8ebb8cf-d133-46d0-bb77-906589d0067d,Namespace:kube-system,Attempt:0,} returns sandbox id \"355edccd5ca32d2da6900a67ee7e0ecba367d5c8d797f816aa1496bd5f78d5d6\""
Sep 16 04:42:27.497173 containerd[1865]: time="2025-09-16T04:42:27.497069497Z" level=info msg="CreateContainer within sandbox \"355edccd5ca32d2da6900a67ee7e0ecba367d5c8d797f816aa1496bd5f78d5d6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 16 04:42:27.512033 containerd[1865]: time="2025-09-16T04:42:27.511947765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b6466f845-jd2sh,Uid:50f30276-5c1b-4f47-90a5-73b141a40f1c,Namespace:calico-system,Attempt:0,} returns sandbox id \"281c130e1bd1134290218a6212b2164613b4769255157d6a38d41c0f86331934\""
Sep 16 04:42:27.550499 containerd[1865]: time="2025-09-16T04:42:27.550396241Z" level=info msg="Container 49d6fc34e45d0f43d08981d258979d2f8e7cec2c513eca01a1f5a49e3d727937: CDI devices from CRI Config.CDIDevices: []"
Sep 16 04:42:27.571490 containerd[1865]: time="2025-09-16T04:42:27.571096420Z" level=info msg="CreateContainer within sandbox \"355edccd5ca32d2da6900a67ee7e0ecba367d5c8d797f816aa1496bd5f78d5d6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"49d6fc34e45d0f43d08981d258979d2f8e7cec2c513eca01a1f5a49e3d727937\""
Sep 16 04:42:27.572524 containerd[1865]: time="2025-09-16T04:42:27.572493781Z" level=info msg="StartContainer for \"49d6fc34e45d0f43d08981d258979d2f8e7cec2c513eca01a1f5a49e3d727937\""
Sep 16 04:42:27.574029 containerd[1865]: time="2025-09-16T04:42:27.574003955Z" level=info msg="connecting to shim 49d6fc34e45d0f43d08981d258979d2f8e7cec2c513eca01a1f5a49e3d727937" address="unix:///run/containerd/s/36f819d8c5098ca88e9f978c76215d88b5cec0c33bb9f72abce9ef35c465448b" protocol=ttrpc version=3
Sep 16 04:42:27.595773 systemd[1]: Started cri-containerd-49d6fc34e45d0f43d08981d258979d2f8e7cec2c513eca01a1f5a49e3d727937.scope - libcontainer container 49d6fc34e45d0f43d08981d258979d2f8e7cec2c513eca01a1f5a49e3d727937.
Sep 16 04:42:27.627326 containerd[1865]: time="2025-09-16T04:42:27.627168750Z" level=info msg="StartContainer for \"49d6fc34e45d0f43d08981d258979d2f8e7cec2c513eca01a1f5a49e3d727937\" returns successfully"
Sep 16 04:42:27.632723 systemd-networkd[1693]: calif300e3586e8: Gained IPv6LL
Sep 16 04:42:28.132447 kubelet[3280]: I0916 04:42:28.131872 3280 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-bv4px" podStartSLOduration=38.13185136 podStartE2EDuration="38.13185136s" podCreationTimestamp="2025-09-16 04:41:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:42:28.114117423 +0000 UTC m=+43.358302895" watchObservedRunningTime="2025-09-16 04:42:28.13185136 +0000 UTC m=+43.376036832"
Sep 16 04:42:28.337524 systemd-networkd[1693]: cali4293e655505: Gained IPv6LL
Sep 16 04:42:28.528959 systemd-networkd[1693]: calie445abcbf54: Gained IPv6LL
Sep 16 04:42:29.490771 containerd[1865]: time="2025-09-16T04:42:29.490659449Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:42:29.494951 containerd[1865]: time="2025-09-16T04:42:29.494923160Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=44530807"
Sep 16 04:42:29.499160 containerd[1865]: time="2025-09-16T04:42:29.499105701Z" level=info msg="ImageCreate event name:\"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:42:29.504091 containerd[1865]: time="2025-09-16T04:42:29.504056161Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:42:29.504704 containerd[1865]: time="2025-09-16T04:42:29.504649483Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 3.023202753s"
Sep 16 04:42:29.504704 containerd[1865]: time="2025-09-16T04:42:29.504678876Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\""
Sep 16 04:42:29.506343 containerd[1865]: time="2025-09-16T04:42:29.506250763Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\""
Sep 16 04:42:29.507767 containerd[1865]: time="2025-09-16T04:42:29.507747727Z" level=info msg="CreateContainer within sandbox \"466fdc08fb62e9e9cc8a14a51adc81934dfdba1f7f8548f3c403c111142b765f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Sep 16 04:42:29.532382 containerd[1865]: time="2025-09-16T04:42:29.532205578Z" level=info msg="Container 8c2fe5cb537a8764920952b290a35224d20d118bc08592c3bb210c93251df17b: CDI devices from CRI Config.CDIDevices: []"
Sep 16 04:42:29.551214 containerd[1865]: time="2025-09-16T04:42:29.551174944Z" level=info msg="CreateContainer within sandbox \"466fdc08fb62e9e9cc8a14a51adc81934dfdba1f7f8548f3c403c111142b765f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8c2fe5cb537a8764920952b290a35224d20d118bc08592c3bb210c93251df17b\""
Sep 16 04:42:29.552584 containerd[1865]: time="2025-09-16T04:42:29.552121685Z" level=info msg="StartContainer for \"8c2fe5cb537a8764920952b290a35224d20d118bc08592c3bb210c93251df17b\""
Sep 16 04:42:29.553429 containerd[1865]: time="2025-09-16T04:42:29.553378682Z" level=info msg="connecting to shim 8c2fe5cb537a8764920952b290a35224d20d118bc08592c3bb210c93251df17b" address="unix:///run/containerd/s/6b2276c23d9489feca0529db138a852f14b37d892d28ade64c7907b003b6f809" protocol=ttrpc version=3
Sep 16 04:42:29.578742 systemd[1]: Started cri-containerd-8c2fe5cb537a8764920952b290a35224d20d118bc08592c3bb210c93251df17b.scope - libcontainer container 8c2fe5cb537a8764920952b290a35224d20d118bc08592c3bb210c93251df17b.
Sep 16 04:42:29.629065 containerd[1865]: time="2025-09-16T04:42:29.629028134Z" level=info msg="StartContainer for \"8c2fe5cb537a8764920952b290a35224d20d118bc08592c3bb210c93251df17b\" returns successfully"
Sep 16 04:42:30.113404 kubelet[3280]: I0916 04:42:30.113099 3280 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-68cdb9d4d4-g2x98" podStartSLOduration=25.824284585 podStartE2EDuration="30.113087s" podCreationTimestamp="2025-09-16 04:42:00 +0000 UTC" firstStartedPulling="2025-09-16 04:42:25.217366241 +0000 UTC m=+40.461551713" lastFinishedPulling="2025-09-16 04:42:29.506168648 +0000 UTC m=+44.750354128" observedRunningTime="2025-09-16 04:42:30.112701084 +0000 UTC m=+45.356886564" watchObservedRunningTime="2025-09-16 04:42:30.113087 +0000 UTC m=+45.357272480"
Sep 16 04:42:31.108794 kubelet[3280]: I0916 04:42:31.108017 3280 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 16 04:42:31.570238 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount147719255.mount: Deactivated successfully.
Sep 16 04:42:32.197630 containerd[1865]: time="2025-09-16T04:42:32.197527114Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:42:32.203697 containerd[1865]: time="2025-09-16T04:42:32.203655849Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=61845332"
Sep 16 04:42:32.208012 containerd[1865]: time="2025-09-16T04:42:32.207957985Z" level=info msg="ImageCreate event name:\"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:42:32.212919 containerd[1865]: time="2025-09-16T04:42:32.212871364Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:42:32.214005 containerd[1865]: time="2025-09-16T04:42:32.213893810Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"61845178\" in 2.707308718s"
Sep 16 04:42:32.214005 containerd[1865]: time="2025-09-16T04:42:32.213926147Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\""
Sep 16 04:42:32.215263 containerd[1865]: time="2025-09-16T04:42:32.215234794Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\""
Sep 16 04:42:32.216025 containerd[1865]: time="2025-09-16T04:42:32.215861717Z" level=info msg="CreateContainer within sandbox \"19446881c97ad4f0b5c6e993541fa3705360f8f32b1e750341d250df60fa0a1b\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Sep 16 04:42:32.243894 containerd[1865]: time="2025-09-16T04:42:32.243856193Z" level=info msg="Container d1f08e9c24789e25c7bb9b1a3c69d1fa9201d4793e1c9de4574b81a0f834b0b1: CDI devices from CRI Config.CDIDevices: []"
Sep 16 04:42:32.263111 containerd[1865]: time="2025-09-16T04:42:32.263060871Z" level=info msg="CreateContainer within sandbox \"19446881c97ad4f0b5c6e993541fa3705360f8f32b1e750341d250df60fa0a1b\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"d1f08e9c24789e25c7bb9b1a3c69d1fa9201d4793e1c9de4574b81a0f834b0b1\""
Sep 16 04:42:32.264092 containerd[1865]: time="2025-09-16T04:42:32.264052268Z" level=info msg="StartContainer for \"d1f08e9c24789e25c7bb9b1a3c69d1fa9201d4793e1c9de4574b81a0f834b0b1\""
Sep 16 04:42:32.265960 containerd[1865]: time="2025-09-16T04:42:32.265933421Z" level=info msg="connecting to shim d1f08e9c24789e25c7bb9b1a3c69d1fa9201d4793e1c9de4574b81a0f834b0b1" address="unix:///run/containerd/s/c70d0530a706cd07c397163baeb1d8522126bc97a1be10774a978727a91d52da" protocol=ttrpc version=3
Sep 16 04:42:32.310976 systemd[1]: Started cri-containerd-d1f08e9c24789e25c7bb9b1a3c69d1fa9201d4793e1c9de4574b81a0f834b0b1.scope - libcontainer container d1f08e9c24789e25c7bb9b1a3c69d1fa9201d4793e1c9de4574b81a0f834b0b1.
Sep 16 04:42:32.388962 containerd[1865]: time="2025-09-16T04:42:32.388761265Z" level=info msg="StartContainer for \"d1f08e9c24789e25c7bb9b1a3c69d1fa9201d4793e1c9de4574b81a0f834b0b1\" returns successfully"
Sep 16 04:42:32.527264 containerd[1865]: time="2025-09-16T04:42:32.527213009Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:42:32.531638 containerd[1865]: time="2025-09-16T04:42:32.531446839Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77"
Sep 16 04:42:32.533581 containerd[1865]: time="2025-09-16T04:42:32.533549670Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 317.904591ms"
Sep 16 04:42:32.533794 containerd[1865]: time="2025-09-16T04:42:32.533697770Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\""
Sep 16 04:42:32.537073 containerd[1865]: time="2025-09-16T04:42:32.536795623Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\""
Sep 16 04:42:32.537577 containerd[1865]: time="2025-09-16T04:42:32.537555237Z" level=info msg="CreateContainer within sandbox \"fac594be40861758e6b24dac15a3376b609ab2e87e63194f42b29974ebd355ec\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Sep 16 04:42:32.562975 containerd[1865]: time="2025-09-16T04:42:32.562935123Z" level=info msg="Container db2a160228cf3fedd6a61790c07f79254c62e63ac7788214a699f74c90af05e9: CDI devices from CRI Config.CDIDevices: []"
Sep 16 04:42:32.588554 containerd[1865]: time="2025-09-16T04:42:32.588517016Z" level=info msg="CreateContainer within sandbox \"fac594be40861758e6b24dac15a3376b609ab2e87e63194f42b29974ebd355ec\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"db2a160228cf3fedd6a61790c07f79254c62e63ac7788214a699f74c90af05e9\""
Sep 16 04:42:32.590449 containerd[1865]: time="2025-09-16T04:42:32.590405616Z" level=info msg="StartContainer for \"db2a160228cf3fedd6a61790c07f79254c62e63ac7788214a699f74c90af05e9\""
Sep 16 04:42:32.592502 containerd[1865]: time="2025-09-16T04:42:32.592445605Z" level=info msg="connecting to shim db2a160228cf3fedd6a61790c07f79254c62e63ac7788214a699f74c90af05e9" address="unix:///run/containerd/s/11462bf9c21f6b4c25e39d03ff3a41a2318c72be480b4d84a8cea2dbbefbe9c9" protocol=ttrpc version=3
Sep 16 04:42:32.615870 systemd[1]: Started cri-containerd-db2a160228cf3fedd6a61790c07f79254c62e63ac7788214a699f74c90af05e9.scope - libcontainer container db2a160228cf3fedd6a61790c07f79254c62e63ac7788214a699f74c90af05e9.
Sep 16 04:42:32.675234 containerd[1865]: time="2025-09-16T04:42:32.675175996Z" level=info msg="StartContainer for \"db2a160228cf3fedd6a61790c07f79254c62e63ac7788214a699f74c90af05e9\" returns successfully"
Sep 16 04:42:32.847392 containerd[1865]: time="2025-09-16T04:42:32.846478648Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:42:32.849705 containerd[1865]: time="2025-09-16T04:42:32.849682232Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77"
Sep 16 04:42:32.850897 containerd[1865]: time="2025-09-16T04:42:32.850867803Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 314.049108ms"
Sep 16 04:42:32.851004 containerd[1865]: time="2025-09-16T04:42:32.850991679Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\""
Sep 16 04:42:32.851892 containerd[1865]: time="2025-09-16T04:42:32.851867697Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\""
Sep 16 04:42:32.852909 containerd[1865]: time="2025-09-16T04:42:32.852882840Z" level=info msg="CreateContainer within sandbox \"aaf37971be8de935833d69161b5d6d018b96300108de4defde7787687eaddc23\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Sep 16 04:42:32.878146 containerd[1865]: time="2025-09-16T04:42:32.878100881Z" level=info msg="Container 4be50cac89754f3ba5abacda604e5af865880ddc7c14db2fe297515c9da32a0d: CDI devices from CRI Config.CDIDevices: []"
Sep 16 04:42:32.903345 containerd[1865]: time="2025-09-16T04:42:32.903301433Z" level=info msg="CreateContainer within sandbox \"aaf37971be8de935833d69161b5d6d018b96300108de4defde7787687eaddc23\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"4be50cac89754f3ba5abacda604e5af865880ddc7c14db2fe297515c9da32a0d\""
Sep 16 04:42:32.904053 containerd[1865]: time="2025-09-16T04:42:32.904029887Z" level=info msg="StartContainer for \"4be50cac89754f3ba5abacda604e5af865880ddc7c14db2fe297515c9da32a0d\""
Sep 16 04:42:32.906201 containerd[1865]: time="2025-09-16T04:42:32.906025939Z" level=info msg="connecting to shim 4be50cac89754f3ba5abacda604e5af865880ddc7c14db2fe297515c9da32a0d" address="unix:///run/containerd/s/1725cf0d2fc1a964b24e65fab1dbaeeff27799eb8abfbde61c4230fabfff7c69" protocol=ttrpc version=3
Sep 16 04:42:32.924782 systemd[1]: Started cri-containerd-4be50cac89754f3ba5abacda604e5af865880ddc7c14db2fe297515c9da32a0d.scope - libcontainer container 4be50cac89754f3ba5abacda604e5af865880ddc7c14db2fe297515c9da32a0d.
Sep 16 04:42:32.987120 containerd[1865]: time="2025-09-16T04:42:32.986906491Z" level=info msg="StartContainer for \"4be50cac89754f3ba5abacda604e5af865880ddc7c14db2fe297515c9da32a0d\" returns successfully"
Sep 16 04:42:33.141838 kubelet[3280]: I0916 04:42:33.140570 3280 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7b46d565d6-8h62f" podStartSLOduration=27.49360803 podStartE2EDuration="34.140555432s" podCreationTimestamp="2025-09-16 04:41:59 +0000 UTC" firstStartedPulling="2025-09-16 04:42:26.204751362 +0000 UTC m=+41.448936834" lastFinishedPulling="2025-09-16 04:42:32.851698764 +0000 UTC m=+48.095884236" observedRunningTime="2025-09-16 04:42:33.137105129 +0000 UTC m=+48.381290617" watchObservedRunningTime="2025-09-16 04:42:33.140555432 +0000 UTC m=+48.384740904"
Sep 16 04:42:33.159402 kubelet[3280]: I0916 04:42:33.158947 3280 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7b46d565d6-2bmc2" podStartSLOduration=27.785728267 podStartE2EDuration="34.158934229s" podCreationTimestamp="2025-09-16 04:41:59 +0000 UTC" firstStartedPulling="2025-09-16 04:42:26.1612912 +0000 UTC m=+41.405476672" lastFinishedPulling="2025-09-16 04:42:32.534497162 +0000 UTC m=+47.778682634" observedRunningTime="2025-09-16 04:42:33.157670679 +0000 UTC m=+48.401856167" watchObservedRunningTime="2025-09-16 04:42:33.158934229 +0000 UTC m=+48.403119701"
Sep 16 04:42:33.174768 kubelet[3280]: I0916 04:42:33.174719 3280 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-xmq74" podStartSLOduration=24.264627279 podStartE2EDuration="31.174703948s" podCreationTimestamp="2025-09-16 04:42:02 +0000 UTC" firstStartedPulling="2025-09-16 04:42:25.304542403 +0000 UTC m=+40.548727875" lastFinishedPulling="2025-09-16 04:42:32.214619072 +0000 UTC m=+47.458804544" observedRunningTime="2025-09-16 04:42:33.173574146 +0000 UTC m=+48.417759650" watchObservedRunningTime="2025-09-16 04:42:33.174703948 +0000 UTC m=+48.418889420"
Sep 16 04:42:33.346072 containerd[1865]: time="2025-09-16T04:42:33.346027537Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d1f08e9c24789e25c7bb9b1a3c69d1fa9201d4793e1c9de4574b81a0f834b0b1\" id:\"4b7f9d8f14ddff6543cfb64d3182676243e4e29870b7088bb7c9f92428299dd5\" pid:5710 exit_status:1 exited_at:{seconds:1757997753 nanos:339500910}"
Sep 16 04:42:34.253416 containerd[1865]: time="2025-09-16T04:42:34.253381103Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d1f08e9c24789e25c7bb9b1a3c69d1fa9201d4793e1c9de4574b81a0f834b0b1\" id:\"77f51437276d7b8460858859355fcf49c7befaefb886e1660d93ddbf520f72d0\" pid:5734 exit_status:1 exited_at:{seconds:1757997754 nanos:252928601}"
Sep 16 04:42:34.471434 containerd[1865]: time="2025-09-16T04:42:34.470988057Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:42:34.476755 containerd[1865]: time="2025-09-16T04:42:34.476722521Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=13761208"
Sep 16 04:42:34.480620 containerd[1865]: time="2025-09-16T04:42:34.480577942Z" level=info msg="ImageCreate event name:\"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:42:34.486260 containerd[1865]: time="2025-09-16T04:42:34.486216699Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:42:34.488046 containerd[1865]: time="2025-09-16T04:42:34.488000097Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"15130401\" in 1.636104431s"
Sep 16 04:42:34.490630 containerd[1865]: time="2025-09-16T04:42:34.489047249Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\""
Sep 16 04:42:34.492159 containerd[1865]: time="2025-09-16T04:42:34.492138368Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\""
Sep 16 04:42:34.495549 containerd[1865]: time="2025-09-16T04:42:34.495525383Z" level=info msg="CreateContainer within sandbox \"37a481d6c7fbdd02af5755e828815d90a2d49bd2a5d5669e92601c6da8b8b5d2\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Sep 16 04:42:34.533883 containerd[1865]: time="2025-09-16T04:42:34.533796833Z" level=info msg="Container 3faf83302fc9b0d0d917f1da497a4fd02f043d72916f4f2f057349ab7207ab7f: CDI devices from CRI Config.CDIDevices: []"
Sep 16 04:42:34.569866 containerd[1865]: time="2025-09-16T04:42:34.569826158Z" level=info msg="CreateContainer within sandbox \"37a481d6c7fbdd02af5755e828815d90a2d49bd2a5d5669e92601c6da8b8b5d2\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"3faf83302fc9b0d0d917f1da497a4fd02f043d72916f4f2f057349ab7207ab7f\""
Sep 16 04:42:34.572722 containerd[1865]: time="2025-09-16T04:42:34.571788642Z" level=info msg="StartContainer for \"3faf83302fc9b0d0d917f1da497a4fd02f043d72916f4f2f057349ab7207ab7f\""
Sep 16 04:42:34.573101 containerd[1865]: time="2025-09-16T04:42:34.573055185Z" level=info msg="connecting to shim 3faf83302fc9b0d0d917f1da497a4fd02f043d72916f4f2f057349ab7207ab7f" address="unix:///run/containerd/s/9b0acbd702e74908cbb770d5181ad14ea43ef85703c74eacb0fc610e1f4acb93" protocol=ttrpc version=3
Sep 16 04:42:34.607048 systemd[1]: Started cri-containerd-3faf83302fc9b0d0d917f1da497a4fd02f043d72916f4f2f057349ab7207ab7f.scope - libcontainer container 3faf83302fc9b0d0d917f1da497a4fd02f043d72916f4f2f057349ab7207ab7f.
Sep 16 04:42:34.668873 containerd[1865]: time="2025-09-16T04:42:34.668837632Z" level=info msg="StartContainer for \"3faf83302fc9b0d0d917f1da497a4fd02f043d72916f4f2f057349ab7207ab7f\" returns successfully"
Sep 16 04:42:34.968169 kubelet[3280]: I0916 04:42:34.967463 3280 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Sep 16 04:42:34.968169 kubelet[3280]: I0916 04:42:34.967956 3280 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Sep 16 04:42:35.167303 kubelet[3280]: I0916 04:42:35.166905 3280 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-kp8nh" podStartSLOduration=23.79355899 podStartE2EDuration="33.166887342s" podCreationTimestamp="2025-09-16 04:42:02 +0000 UTC" firstStartedPulling="2025-09-16 04:42:25.11847863 +0000 UTC m=+40.362664102" lastFinishedPulling="2025-09-16 04:42:34.491806974 +0000 UTC m=+49.735992454" observedRunningTime="2025-09-16 04:42:35.165543589 +0000 UTC m=+50.409729069" watchObservedRunningTime="2025-09-16 04:42:35.166887342 +0000 UTC m=+50.411072822"
Sep 16 04:42:35.246672 containerd[1865]: time="2025-09-16T04:42:35.246271248Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d1f08e9c24789e25c7bb9b1a3c69d1fa9201d4793e1c9de4574b81a0f834b0b1\" id:\"f0bef33bfbf989bd011531f8a5c1916672773f3c39cb2c4611bbd0173f07d266\" pid:5799 exit_status:1 exited_at:{seconds:1757997755 nanos:245847683}"
Sep 16 04:42:39.095466 containerd[1865]: time="2025-09-16T04:42:39.095412017Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:42:39.098407 containerd[1865]: time="2025-09-16T04:42:39.098306050Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=48134957"
Sep 16 04:42:39.102508 containerd[1865]: time="2025-09-16T04:42:39.101905528Z" level=info msg="ImageCreate event name:\"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:42:39.106016 containerd[1865]: time="2025-09-16T04:42:39.105978756Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:42:39.106513 containerd[1865]: time="2025-09-16T04:42:39.106470691Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"49504166\" in 4.61416315s"
Sep 16 04:42:39.106597 containerd[1865]: time="2025-09-16T04:42:39.106582623Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\""
Sep 16 04:42:39.129721 containerd[1865]: time="2025-09-16T04:42:39.129648304Z" level=info msg="CreateContainer within sandbox \"281c130e1bd1134290218a6212b2164613b4769255157d6a38d41c0f86331934\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Sep 16 04:42:39.151816 containerd[1865]: time="2025-09-16T04:42:39.151785852Z" level=info msg="Container e15f6881385296be1473cd7ced5095e431d6a3fac2873178e81ee662b51c66e1: CDI devices from CRI Config.CDIDevices: []"
Sep 16 04:42:39.173309 containerd[1865]: time="2025-09-16T04:42:39.173255020Z" level=info msg="CreateContainer within sandbox \"281c130e1bd1134290218a6212b2164613b4769255157d6a38d41c0f86331934\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"e15f6881385296be1473cd7ced5095e431d6a3fac2873178e81ee662b51c66e1\""
Sep 16 04:42:39.175186 containerd[1865]: time="2025-09-16T04:42:39.175159863Z" level=info msg="StartContainer for \"e15f6881385296be1473cd7ced5095e431d6a3fac2873178e81ee662b51c66e1\""
Sep 16 04:42:39.176109 containerd[1865]: time="2025-09-16T04:42:39.176069042Z" level=info msg="connecting to shim e15f6881385296be1473cd7ced5095e431d6a3fac2873178e81ee662b51c66e1" address="unix:///run/containerd/s/73a2c1b798cdda8c75de6024cff132a6987084dc969a5c014f777288e47b378f" protocol=ttrpc version=3
Sep 16 04:42:39.202746 systemd[1]: Started cri-containerd-e15f6881385296be1473cd7ced5095e431d6a3fac2873178e81ee662b51c66e1.scope - libcontainer container e15f6881385296be1473cd7ced5095e431d6a3fac2873178e81ee662b51c66e1.
Sep 16 04:42:39.246835 containerd[1865]: time="2025-09-16T04:42:39.246785660Z" level=info msg="StartContainer for \"e15f6881385296be1473cd7ced5095e431d6a3fac2873178e81ee662b51c66e1\" returns successfully"
Sep 16 04:42:40.158457 kubelet[3280]: I0916 04:42:40.157972 3280 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6b6466f845-jd2sh" podStartSLOduration=26.561902061 podStartE2EDuration="38.157959243s" podCreationTimestamp="2025-09-16 04:42:02 +0000 UTC" firstStartedPulling="2025-09-16 04:42:27.513622743 +0000 UTC m=+42.757808215" lastFinishedPulling="2025-09-16 04:42:39.109679925 +0000 UTC m=+54.353865397" observedRunningTime="2025-09-16 04:42:40.157378666 +0000 UTC m=+55.401564146" watchObservedRunningTime="2025-09-16 04:42:40.157959243 +0000 UTC m=+55.402144715"
Sep 16 04:42:40.170941 containerd[1865]: time="2025-09-16T04:42:40.170908095Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e15f6881385296be1473cd7ced5095e431d6a3fac2873178e81ee662b51c66e1\" id:\"0b05366ae57b658a9fce10ebf9efa78ffd37b36bf6b9822bdf0ab068c4092d72\" pid:5879 exited_at:{seconds:1757997760 nanos:170729506}"
Sep 16 04:42:54.588915 containerd[1865]: time="2025-09-16T04:42:54.588850988Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f4fee02196b9e4fd8828c2aa0a9ce23a2673f599d1b90bd498ee291154737863\" id:\"70d44609e5526f1f538d348f6986f4f801c10b4bb3589802c5e967f7dd64c850\" pid:5907 exited_at:{seconds:1757997774 nanos:587708353}"
Sep 16 04:42:58.564090 kubelet[3280]: I0916 04:42:58.564042 3280 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 16 04:42:58.642221 containerd[1865]: time="2025-09-16T04:42:58.641921452Z" level=info msg="StopContainer for \"db2a160228cf3fedd6a61790c07f79254c62e63ac7788214a699f74c90af05e9\" with timeout 30 (s)"
Sep 16 04:42:58.643422 containerd[1865]: time="2025-09-16T04:42:58.643250141Z" level=info msg="Stop container \"db2a160228cf3fedd6a61790c07f79254c62e63ac7788214a699f74c90af05e9\" with signal terminated"
Sep 16 04:42:58.688158 systemd[1]: Created slice kubepods-besteffort-pod2a9e78b8_2cb2_4023_b568_608e54b75650.slice - libcontainer container kubepods-besteffort-pod2a9e78b8_2cb2_4023_b568_608e54b75650.slice.
Sep 16 04:42:58.693399 systemd[1]: cri-containerd-db2a160228cf3fedd6a61790c07f79254c62e63ac7788214a699f74c90af05e9.scope: Deactivated successfully.
Sep 16 04:42:58.694909 containerd[1865]: time="2025-09-16T04:42:58.694806879Z" level=info msg="received exit event container_id:\"db2a160228cf3fedd6a61790c07f79254c62e63ac7788214a699f74c90af05e9\" id:\"db2a160228cf3fedd6a61790c07f79254c62e63ac7788214a699f74c90af05e9\" pid:5639 exit_status:1 exited_at:{seconds:1757997778 nanos:694371313}"
Sep 16 04:42:58.695137 containerd[1865]: time="2025-09-16T04:42:58.695116304Z" level=info msg="TaskExit event in podsandbox handler container_id:\"db2a160228cf3fedd6a61790c07f79254c62e63ac7788214a699f74c90af05e9\" id:\"db2a160228cf3fedd6a61790c07f79254c62e63ac7788214a699f74c90af05e9\" pid:5639 exit_status:1 exited_at:{seconds:1757997778 nanos:694371313}"
Sep 16 04:42:58.695736 systemd[1]: cri-containerd-db2a160228cf3fedd6a61790c07f79254c62e63ac7788214a699f74c90af05e9.scope: Consumed 1.049s CPU time, 49.3M memory peak, 828K read from disk.
Sep 16 04:42:58.731973 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-db2a160228cf3fedd6a61790c07f79254c62e63ac7788214a699f74c90af05e9-rootfs.mount: Deactivated successfully.
Sep 16 04:42:58.806740 kubelet[3280]: I0916 04:42:58.806659 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtrxk\" (UniqueName: \"kubernetes.io/projected/2a9e78b8-2cb2-4023-b568-608e54b75650-kube-api-access-gtrxk\") pod \"calico-apiserver-68cdb9d4d4-ntvmb\" (UID: \"2a9e78b8-2cb2-4023-b568-608e54b75650\") " pod="calico-apiserver/calico-apiserver-68cdb9d4d4-ntvmb"
Sep 16 04:42:58.806740 kubelet[3280]: I0916 04:42:58.806718 3280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2a9e78b8-2cb2-4023-b568-608e54b75650-calico-apiserver-certs\") pod \"calico-apiserver-68cdb9d4d4-ntvmb\" (UID: \"2a9e78b8-2cb2-4023-b568-608e54b75650\") " pod="calico-apiserver/calico-apiserver-68cdb9d4d4-ntvmb"
Sep 16 04:42:58.992649 containerd[1865]: time="2025-09-16T04:42:58.992526750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68cdb9d4d4-ntvmb,Uid:2a9e78b8-2cb2-4023-b568-608e54b75650,Namespace:calico-apiserver,Attempt:0,}"
Sep 16 04:42:59.912626 containerd[1865]: time="2025-09-16T04:42:59.912150499Z" level=info msg="StopContainer for \"db2a160228cf3fedd6a61790c07f79254c62e63ac7788214a699f74c90af05e9\" returns successfully"
Sep 16 04:42:59.914054 containerd[1865]: time="2025-09-16T04:42:59.913206963Z" level=info msg="StopPodSandbox for \"fac594be40861758e6b24dac15a3376b609ab2e87e63194f42b29974ebd355ec\""
Sep 16 04:42:59.925628 containerd[1865]: time="2025-09-16T04:42:59.924531076Z" level=info msg="Container to stop \"db2a160228cf3fedd6a61790c07f79254c62e63ac7788214a699f74c90af05e9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 16 04:42:59.933971 systemd[1]: cri-containerd-fac594be40861758e6b24dac15a3376b609ab2e87e63194f42b29974ebd355ec.scope: Deactivated successfully.
Sep 16 04:42:59.937216 containerd[1865]: time="2025-09-16T04:42:59.937162709Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fac594be40861758e6b24dac15a3376b609ab2e87e63194f42b29974ebd355ec\" id:\"fac594be40861758e6b24dac15a3376b609ab2e87e63194f42b29974ebd355ec\" pid:5250 exit_status:137 exited_at:{seconds:1757997779 nanos:936419302}"
Sep 16 04:42:59.974780 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fac594be40861758e6b24dac15a3376b609ab2e87e63194f42b29974ebd355ec-rootfs.mount: Deactivated successfully.
Sep 16 04:42:59.976209 containerd[1865]: time="2025-09-16T04:42:59.976124015Z" level=info msg="shim disconnected" id=fac594be40861758e6b24dac15a3376b609ab2e87e63194f42b29974ebd355ec namespace=k8s.io
Sep 16 04:42:59.977049 containerd[1865]: time="2025-09-16T04:42:59.976174464Z" level=warning msg="cleaning up after shim disconnected" id=fac594be40861758e6b24dac15a3376b609ab2e87e63194f42b29974ebd355ec namespace=k8s.io
Sep 16 04:42:59.977143 containerd[1865]: time="2025-09-16T04:42:59.977127261Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 16 04:42:59.979568 containerd[1865]: time="2025-09-16T04:42:59.976748754Z" level=info msg="received exit event sandbox_id:\"fac594be40861758e6b24dac15a3376b609ab2e87e63194f42b29974ebd355ec\" exit_status:137 exited_at:{seconds:1757997779 nanos:936419302}"
Sep 16 04:42:59.992935 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fac594be40861758e6b24dac15a3376b609ab2e87e63194f42b29974ebd355ec-shm.mount: Deactivated successfully.
Sep 16 04:43:00.028599 systemd-networkd[1693]: cali7b0c8f37f6e: Link UP
Sep 16 04:43:00.033732 systemd-networkd[1693]: cali7b0c8f37f6e: Gained carrier
Sep 16 04:43:00.061024 containerd[1865]: 2025-09-16 04:42:59.922 [INFO][5943] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.0.0--n--404d4275b5-k8s-calico--apiserver--68cdb9d4d4--ntvmb-eth0 calico-apiserver-68cdb9d4d4- calico-apiserver 2a9e78b8-2cb2-4023-b568-608e54b75650 1124 0 2025-09-16 04:42:58 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:68cdb9d4d4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.0.0-n-404d4275b5 calico-apiserver-68cdb9d4d4-ntvmb eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7b0c8f37f6e [] [] }} ContainerID="c54dcc75aa7ae1e199d069de3da1f0b08a8a39546d01b074b3d954aad82c47d9" Namespace="calico-apiserver" Pod="calico-apiserver-68cdb9d4d4-ntvmb" WorkloadEndpoint="ci--4459.0.0--n--404d4275b5-k8s-calico--apiserver--68cdb9d4d4--ntvmb-"
Sep 16 04:43:00.061024 containerd[1865]: 2025-09-16 04:42:59.923 [INFO][5943] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c54dcc75aa7ae1e199d069de3da1f0b08a8a39546d01b074b3d954aad82c47d9" Namespace="calico-apiserver" Pod="calico-apiserver-68cdb9d4d4-ntvmb" WorkloadEndpoint="ci--4459.0.0--n--404d4275b5-k8s-calico--apiserver--68cdb9d4d4--ntvmb-eth0"
Sep 16 04:43:00.061024 containerd[1865]: 2025-09-16 04:42:59.951 [INFO][5959] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c54dcc75aa7ae1e199d069de3da1f0b08a8a39546d01b074b3d954aad82c47d9" HandleID="k8s-pod-network.c54dcc75aa7ae1e199d069de3da1f0b08a8a39546d01b074b3d954aad82c47d9" Workload="ci--4459.0.0--n--404d4275b5-k8s-calico--apiserver--68cdb9d4d4--ntvmb-eth0"
Sep 16 04:43:00.061024 containerd[1865]: 2025-09-16 04:42:59.952 [INFO][5959] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c54dcc75aa7ae1e199d069de3da1f0b08a8a39546d01b074b3d954aad82c47d9" HandleID="k8s-pod-network.c54dcc75aa7ae1e199d069de3da1f0b08a8a39546d01b074b3d954aad82c47d9" Workload="ci--4459.0.0--n--404d4275b5-k8s-calico--apiserver--68cdb9d4d4--ntvmb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b610), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459.0.0-n-404d4275b5", "pod":"calico-apiserver-68cdb9d4d4-ntvmb", "timestamp":"2025-09-16 04:42:59.951879477 +0000 UTC"}, Hostname:"ci-4459.0.0-n-404d4275b5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 16 04:43:00.061024 containerd[1865]: 2025-09-16 04:42:59.952 [INFO][5959] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 16 04:43:00.061024 containerd[1865]: 2025-09-16 04:42:59.952 [INFO][5959] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 16 04:43:00.061024 containerd[1865]: 2025-09-16 04:42:59.952 [INFO][5959] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.0.0-n-404d4275b5'
Sep 16 04:43:00.061024 containerd[1865]: 2025-09-16 04:42:59.959 [INFO][5959] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c54dcc75aa7ae1e199d069de3da1f0b08a8a39546d01b074b3d954aad82c47d9" host="ci-4459.0.0-n-404d4275b5"
Sep 16 04:43:00.061024 containerd[1865]: 2025-09-16 04:42:59.964 [INFO][5959] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.0.0-n-404d4275b5"
Sep 16 04:43:00.061024 containerd[1865]: 2025-09-16 04:42:59.971 [INFO][5959] ipam/ipam.go 511: Trying affinity for 192.168.27.192/26 host="ci-4459.0.0-n-404d4275b5"
Sep 16 04:43:00.061024 containerd[1865]: 2025-09-16 04:42:59.974 [INFO][5959] ipam/ipam.go 158: Attempting to load block cidr=192.168.27.192/26 host="ci-4459.0.0-n-404d4275b5"
Sep 16 04:43:00.061024 containerd[1865]: 2025-09-16 04:42:59.980 [INFO][5959] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.27.192/26 host="ci-4459.0.0-n-404d4275b5"
Sep 16 04:43:00.061024 containerd[1865]: 2025-09-16 04:42:59.980 [INFO][5959] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.27.192/26 handle="k8s-pod-network.c54dcc75aa7ae1e199d069de3da1f0b08a8a39546d01b074b3d954aad82c47d9" host="ci-4459.0.0-n-404d4275b5"
Sep 16 04:43:00.061024 containerd[1865]: 2025-09-16 04:42:59.982 [INFO][5959] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c54dcc75aa7ae1e199d069de3da1f0b08a8a39546d01b074b3d954aad82c47d9
Sep 16 04:43:00.061024 containerd[1865]: 2025-09-16 04:42:59.995 [INFO][5959] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.27.192/26 handle="k8s-pod-network.c54dcc75aa7ae1e199d069de3da1f0b08a8a39546d01b074b3d954aad82c47d9" host="ci-4459.0.0-n-404d4275b5"
Sep 16 04:43:00.061024 containerd[1865]: 2025-09-16 04:43:00.012 [INFO][5959] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.27.202/26] block=192.168.27.192/26 handle="k8s-pod-network.c54dcc75aa7ae1e199d069de3da1f0b08a8a39546d01b074b3d954aad82c47d9" host="ci-4459.0.0-n-404d4275b5"
Sep 16 04:43:00.061024 containerd[1865]: 2025-09-16 04:43:00.012 [INFO][5959] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.27.202/26] handle="k8s-pod-network.c54dcc75aa7ae1e199d069de3da1f0b08a8a39546d01b074b3d954aad82c47d9" host="ci-4459.0.0-n-404d4275b5"
Sep 16 04:43:00.061024 containerd[1865]: 2025-09-16 04:43:00.012 [INFO][5959] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 16 04:43:00.061024 containerd[1865]: 2025-09-16 04:43:00.012 [INFO][5959] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.27.202/26] IPv6=[] ContainerID="c54dcc75aa7ae1e199d069de3da1f0b08a8a39546d01b074b3d954aad82c47d9" HandleID="k8s-pod-network.c54dcc75aa7ae1e199d069de3da1f0b08a8a39546d01b074b3d954aad82c47d9" Workload="ci--4459.0.0--n--404d4275b5-k8s-calico--apiserver--68cdb9d4d4--ntvmb-eth0"
Sep 16 04:43:00.062144 containerd[1865]: 2025-09-16 04:43:00.017 [INFO][5943] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c54dcc75aa7ae1e199d069de3da1f0b08a8a39546d01b074b3d954aad82c47d9" Namespace="calico-apiserver" Pod="calico-apiserver-68cdb9d4d4-ntvmb" WorkloadEndpoint="ci--4459.0.0--n--404d4275b5-k8s-calico--apiserver--68cdb9d4d4--ntvmb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.0.0--n--404d4275b5-k8s-calico--apiserver--68cdb9d4d4--ntvmb-eth0", GenerateName:"calico-apiserver-68cdb9d4d4-", Namespace:"calico-apiserver", SelfLink:"", UID:"2a9e78b8-2cb2-4023-b568-608e54b75650", ResourceVersion:"1124", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 42, 58, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68cdb9d4d4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.0.0-n-404d4275b5", ContainerID:"", Pod:"calico-apiserver-68cdb9d4d4-ntvmb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.27.202/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7b0c8f37f6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 16 04:43:00.062144 containerd[1865]: 2025-09-16 04:43:00.017 [INFO][5943] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.27.202/32] ContainerID="c54dcc75aa7ae1e199d069de3da1f0b08a8a39546d01b074b3d954aad82c47d9" Namespace="calico-apiserver" Pod="calico-apiserver-68cdb9d4d4-ntvmb" WorkloadEndpoint="ci--4459.0.0--n--404d4275b5-k8s-calico--apiserver--68cdb9d4d4--ntvmb-eth0"
Sep 16 04:43:00.062144 containerd[1865]: 2025-09-16 04:43:00.017 [INFO][5943] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7b0c8f37f6e ContainerID="c54dcc75aa7ae1e199d069de3da1f0b08a8a39546d01b074b3d954aad82c47d9" Namespace="calico-apiserver" Pod="calico-apiserver-68cdb9d4d4-ntvmb" WorkloadEndpoint="ci--4459.0.0--n--404d4275b5-k8s-calico--apiserver--68cdb9d4d4--ntvmb-eth0"
Sep 16 04:43:00.062144 containerd[1865]: 2025-09-16 04:43:00.033 [INFO][5943] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c54dcc75aa7ae1e199d069de3da1f0b08a8a39546d01b074b3d954aad82c47d9" Namespace="calico-apiserver" Pod="calico-apiserver-68cdb9d4d4-ntvmb" WorkloadEndpoint="ci--4459.0.0--n--404d4275b5-k8s-calico--apiserver--68cdb9d4d4--ntvmb-eth0"
Sep 16 04:43:00.062144 containerd[1865]: 2025-09-16 04:43:00.036 [INFO][5943] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c54dcc75aa7ae1e199d069de3da1f0b08a8a39546d01b074b3d954aad82c47d9" Namespace="calico-apiserver" Pod="calico-apiserver-68cdb9d4d4-ntvmb" WorkloadEndpoint="ci--4459.0.0--n--404d4275b5-k8s-calico--apiserver--68cdb9d4d4--ntvmb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.0.0--n--404d4275b5-k8s-calico--apiserver--68cdb9d4d4--ntvmb-eth0", GenerateName:"calico-apiserver-68cdb9d4d4-", Namespace:"calico-apiserver", SelfLink:"", UID:"2a9e78b8-2cb2-4023-b568-608e54b75650", ResourceVersion:"1124", Generation:0, CreationTimestamp:time.Date(2025, time.September, 16, 4, 42, 58, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68cdb9d4d4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.0.0-n-404d4275b5", ContainerID:"c54dcc75aa7ae1e199d069de3da1f0b08a8a39546d01b074b3d954aad82c47d9", Pod:"calico-apiserver-68cdb9d4d4-ntvmb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.27.202/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7b0c8f37f6e", MAC:"76:77:ce:fb:fc:b2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 16 04:43:00.062144 containerd[1865]: 2025-09-16 04:43:00.056 [INFO][5943] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c54dcc75aa7ae1e199d069de3da1f0b08a8a39546d01b074b3d954aad82c47d9" Namespace="calico-apiserver" Pod="calico-apiserver-68cdb9d4d4-ntvmb" WorkloadEndpoint="ci--4459.0.0--n--404d4275b5-k8s-calico--apiserver--68cdb9d4d4--ntvmb-eth0"
Sep 16 04:43:00.086701 systemd-networkd[1693]: calif300e3586e8: Link DOWN
Sep 16 04:43:00.087121 systemd-networkd[1693]: calif300e3586e8: Lost carrier
Sep 16 04:43:00.125135 containerd[1865]: time="2025-09-16T04:43:00.125041540Z" level=info msg="connecting to shim c54dcc75aa7ae1e199d069de3da1f0b08a8a39546d01b074b3d954aad82c47d9" address="unix:///run/containerd/s/931e4fea4cff0d03f3f9c04e69405109a7107188662e3cb2e757f97875e2587e" namespace=k8s.io protocol=ttrpc version=3
Sep 16 04:43:00.158785 systemd[1]: Started cri-containerd-c54dcc75aa7ae1e199d069de3da1f0b08a8a39546d01b074b3d954aad82c47d9.scope - libcontainer container c54dcc75aa7ae1e199d069de3da1f0b08a8a39546d01b074b3d954aad82c47d9.
Sep 16 04:43:00.187654 containerd[1865]: 2025-09-16 04:43:00.082 [INFO][6000] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fac594be40861758e6b24dac15a3376b609ab2e87e63194f42b29974ebd355ec"
Sep 16 04:43:00.187654 containerd[1865]: 2025-09-16 04:43:00.083 [INFO][6000] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fac594be40861758e6b24dac15a3376b609ab2e87e63194f42b29974ebd355ec" iface="eth0" netns="/var/run/netns/cni-bad53563-b783-2290-e10f-e47028028a11"
Sep 16 04:43:00.187654 containerd[1865]: 2025-09-16 04:43:00.084 [INFO][6000] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fac594be40861758e6b24dac15a3376b609ab2e87e63194f42b29974ebd355ec" iface="eth0" netns="/var/run/netns/cni-bad53563-b783-2290-e10f-e47028028a11"
Sep 16 04:43:00.187654 containerd[1865]: 2025-09-16 04:43:00.092 [INFO][6000] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="fac594be40861758e6b24dac15a3376b609ab2e87e63194f42b29974ebd355ec" after=8.894935ms iface="eth0" netns="/var/run/netns/cni-bad53563-b783-2290-e10f-e47028028a11"
Sep 16 04:43:00.187654 containerd[1865]: 2025-09-16 04:43:00.092 [INFO][6000] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fac594be40861758e6b24dac15a3376b609ab2e87e63194f42b29974ebd355ec"
Sep 16 04:43:00.187654 containerd[1865]: 2025-09-16 04:43:00.092 [INFO][6000] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fac594be40861758e6b24dac15a3376b609ab2e87e63194f42b29974ebd355ec"
Sep 16 04:43:00.187654 containerd[1865]: 2025-09-16 04:43:00.116 [INFO][6033] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fac594be40861758e6b24dac15a3376b609ab2e87e63194f42b29974ebd355ec" HandleID="k8s-pod-network.fac594be40861758e6b24dac15a3376b609ab2e87e63194f42b29974ebd355ec" Workload="ci--4459.0.0--n--404d4275b5-k8s-calico--apiserver--7b46d565d6--2bmc2-eth0"
Sep 16 04:43:00.187654 containerd[1865]: 2025-09-16 04:43:00.118 [INFO][6033] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 16 04:43:00.187654 containerd[1865]: 2025-09-16 04:43:00.118 [INFO][6033] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 16 04:43:00.187654 containerd[1865]: 2025-09-16 04:43:00.179 [INFO][6033] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="fac594be40861758e6b24dac15a3376b609ab2e87e63194f42b29974ebd355ec" HandleID="k8s-pod-network.fac594be40861758e6b24dac15a3376b609ab2e87e63194f42b29974ebd355ec" Workload="ci--4459.0.0--n--404d4275b5-k8s-calico--apiserver--7b46d565d6--2bmc2-eth0"
Sep 16 04:43:00.187654 containerd[1865]: 2025-09-16 04:43:00.179 [INFO][6033] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fac594be40861758e6b24dac15a3376b609ab2e87e63194f42b29974ebd355ec" HandleID="k8s-pod-network.fac594be40861758e6b24dac15a3376b609ab2e87e63194f42b29974ebd355ec" Workload="ci--4459.0.0--n--404d4275b5-k8s-calico--apiserver--7b46d565d6--2bmc2-eth0"
Sep 16 04:43:00.187654 containerd[1865]: 2025-09-16 04:43:00.182 [INFO][6033] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 16 04:43:00.187654 containerd[1865]: 2025-09-16 04:43:00.184 [INFO][6000] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fac594be40861758e6b24dac15a3376b609ab2e87e63194f42b29974ebd355ec"
Sep 16 04:43:00.188294 containerd[1865]: time="2025-09-16T04:43:00.188029242Z" level=info msg="TearDown network for sandbox \"fac594be40861758e6b24dac15a3376b609ab2e87e63194f42b29974ebd355ec\" successfully"
Sep 16 04:43:00.188294 containerd[1865]: time="2025-09-16T04:43:00.188153446Z" level=info msg="StopPodSandbox for \"fac594be40861758e6b24dac15a3376b609ab2e87e63194f42b29974ebd355ec\" returns successfully"
Sep 16 04:43:00.209108 kubelet[3280]: I0916 04:43:00.208802 3280 scope.go:117] "RemoveContainer" containerID="db2a160228cf3fedd6a61790c07f79254c62e63ac7788214a699f74c90af05e9"
Sep 16 04:43:00.214026 containerd[1865]: time="2025-09-16T04:43:00.213988472Z" level=info msg="RemoveContainer for \"db2a160228cf3fedd6a61790c07f79254c62e63ac7788214a699f74c90af05e9\""
Sep 16 04:43:00.245447 containerd[1865]: time="2025-09-16T04:43:00.245330450Z" level=info msg="RemoveContainer for \"db2a160228cf3fedd6a61790c07f79254c62e63ac7788214a699f74c90af05e9\" returns successfully"
Sep 16 04:43:00.245688 kubelet[3280]: I0916 04:43:00.245554 3280 scope.go:117] "RemoveContainer" containerID="db2a160228cf3fedd6a61790c07f79254c62e63ac7788214a699f74c90af05e9"
Sep 16 04:43:00.245950 containerd[1865]: time="2025-09-16T04:43:00.245880515Z" level=error msg="ContainerStatus for \"db2a160228cf3fedd6a61790c07f79254c62e63ac7788214a699f74c90af05e9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"db2a160228cf3fedd6a61790c07f79254c62e63ac7788214a699f74c90af05e9\": not found"
Sep 16 04:43:00.246862 kubelet[3280]: E0916 04:43:00.246230 3280 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"db2a160228cf3fedd6a61790c07f79254c62e63ac7788214a699f74c90af05e9\": not found" containerID="db2a160228cf3fedd6a61790c07f79254c62e63ac7788214a699f74c90af05e9"
Sep 16 04:43:00.247194 kubelet[3280]: I0916 04:43:00.247032 3280 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"db2a160228cf3fedd6a61790c07f79254c62e63ac7788214a699f74c90af05e9"} err="failed to get container status \"db2a160228cf3fedd6a61790c07f79254c62e63ac7788214a699f74c90af05e9\": rpc error: code = NotFound desc = an error occurred when try to find container \"db2a160228cf3fedd6a61790c07f79254c62e63ac7788214a699f74c90af05e9\": not found"
Sep 16 04:43:00.259101 containerd[1865]: time="2025-09-16T04:43:00.259076797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68cdb9d4d4-ntvmb,Uid:2a9e78b8-2cb2-4023-b568-608e54b75650,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"c54dcc75aa7ae1e199d069de3da1f0b08a8a39546d01b074b3d954aad82c47d9\""
Sep 16 04:43:00.262276 containerd[1865]: time="2025-09-16T04:43:00.262250181Z" level=info msg="CreateContainer within sandbox \"c54dcc75aa7ae1e199d069de3da1f0b08a8a39546d01b074b3d954aad82c47d9\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Sep 16 04:43:00.280476 containerd[1865]: time="2025-09-16T04:43:00.280446639Z" level=info msg="Container fbc344d6d911c2057b8d743a3cfc559b2e4a940bd0690bf80fbd1415c4450075: CDI devices from CRI Config.CDIDevices: []"
Sep 16 04:43:00.296274 containerd[1865]: time="2025-09-16T04:43:00.296239112Z" level=info msg="CreateContainer within sandbox \"c54dcc75aa7ae1e199d069de3da1f0b08a8a39546d01b074b3d954aad82c47d9\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"fbc344d6d911c2057b8d743a3cfc559b2e4a940bd0690bf80fbd1415c4450075\""
Sep 16 04:43:00.297051 containerd[1865]: time="2025-09-16T04:43:00.297029040Z" level=info msg="StartContainer for \"fbc344d6d911c2057b8d743a3cfc559b2e4a940bd0690bf80fbd1415c4450075\""
Sep 16 04:43:00.298555 containerd[1865]: time="2025-09-16T04:43:00.298528326Z" level=info msg="connecting to shim fbc344d6d911c2057b8d743a3cfc559b2e4a940bd0690bf80fbd1415c4450075" address="unix:///run/containerd/s/931e4fea4cff0d03f3f9c04e69405109a7107188662e3cb2e757f97875e2587e" protocol=ttrpc version=3
Sep 16 04:43:00.317077 kubelet[3280]: I0916 04:43:00.317047 3280 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wd4zz\" (UniqueName: \"kubernetes.io/projected/6fdb1262-eead-4398-8cae-b3e060954c4e-kube-api-access-wd4zz\") pod \"6fdb1262-eead-4398-8cae-b3e060954c4e\" (UID: \"6fdb1262-eead-4398-8cae-b3e060954c4e\") "
Sep 16 04:43:00.317680 kubelet[3280]: I0916 04:43:00.317125 3280 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6fdb1262-eead-4398-8cae-b3e060954c4e-calico-apiserver-certs\") pod \"6fdb1262-eead-4398-8cae-b3e060954c4e\" (UID: \"6fdb1262-eead-4398-8cae-b3e060954c4e\") "
Sep 16 04:43:00.324651 kubelet[3280]: I0916 04:43:00.324613 3280 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fdb1262-eead-4398-8cae-b3e060954c4e-kube-api-access-wd4zz" (OuterVolumeSpecName: "kube-api-access-wd4zz") pod "6fdb1262-eead-4398-8cae-b3e060954c4e" (UID: "6fdb1262-eead-4398-8cae-b3e060954c4e"). InnerVolumeSpecName "kube-api-access-wd4zz". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 16 04:43:00.325457 kubelet[3280]: I0916 04:43:00.324974 3280 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fdb1262-eead-4398-8cae-b3e060954c4e-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "6fdb1262-eead-4398-8cae-b3e060954c4e" (UID: "6fdb1262-eead-4398-8cae-b3e060954c4e"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Sep 16 04:43:00.325787 systemd[1]: Started cri-containerd-fbc344d6d911c2057b8d743a3cfc559b2e4a940bd0690bf80fbd1415c4450075.scope - libcontainer container fbc344d6d911c2057b8d743a3cfc559b2e4a940bd0690bf80fbd1415c4450075.
Sep 16 04:43:00.385250 containerd[1865]: time="2025-09-16T04:43:00.385205373Z" level=info msg="StartContainer for \"fbc344d6d911c2057b8d743a3cfc559b2e4a940bd0690bf80fbd1415c4450075\" returns successfully"
Sep 16 04:43:00.419350 kubelet[3280]: I0916 04:43:00.419294 3280 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6fdb1262-eead-4398-8cae-b3e060954c4e-calico-apiserver-certs\") on node \"ci-4459.0.0-n-404d4275b5\" DevicePath \"\""
Sep 16 04:43:00.419350 kubelet[3280]: I0916 04:43:00.419323 3280 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wd4zz\" (UniqueName: \"kubernetes.io/projected/6fdb1262-eead-4398-8cae-b3e060954c4e-kube-api-access-wd4zz\") on node \"ci-4459.0.0-n-404d4275b5\" DevicePath \"\""
Sep 16 04:43:00.833939 systemd[1]: Removed slice kubepods-besteffort-pod6fdb1262_eead_4398_8cae_b3e060954c4e.slice - libcontainer container kubepods-besteffort-pod6fdb1262_eead_4398_8cae_b3e060954c4e.slice.
Sep 16 04:43:00.834012 systemd[1]: kubepods-besteffort-pod6fdb1262_eead_4398_8cae_b3e060954c4e.slice: Consumed 1.063s CPU time, 49.6M memory peak, 828K read from disk. Sep 16 04:43:00.887390 systemd[1]: run-netns-cni\x2dbad53563\x2db783\x2d2290\x2de10f\x2de47028028a11.mount: Deactivated successfully. Sep 16 04:43:00.887491 systemd[1]: var-lib-kubelet-pods-6fdb1262\x2deead\x2d4398\x2d8cae\x2db3e060954c4e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwd4zz.mount: Deactivated successfully. Sep 16 04:43:00.887538 systemd[1]: var-lib-kubelet-pods-6fdb1262\x2deead\x2d4398\x2d8cae\x2db3e060954c4e-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Sep 16 04:43:01.256081 kubelet[3280]: I0916 04:43:01.255625 3280 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-68cdb9d4d4-ntvmb" podStartSLOduration=3.255597695 podStartE2EDuration="3.255597695s" podCreationTimestamp="2025-09-16 04:42:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:43:01.234646625 +0000 UTC m=+76.478832097" watchObservedRunningTime="2025-09-16 04:43:01.255597695 +0000 UTC m=+76.499783167" Sep 16 04:43:01.872768 systemd-networkd[1693]: cali7b0c8f37f6e: Gained IPv6LL Sep 16 04:43:02.492082 containerd[1865]: time="2025-09-16T04:43:02.492036184Z" level=info msg="StopContainer for \"4be50cac89754f3ba5abacda604e5af865880ddc7c14db2fe297515c9da32a0d\" with timeout 30 (s)" Sep 16 04:43:02.492849 containerd[1865]: time="2025-09-16T04:43:02.492820504Z" level=info msg="Stop container \"4be50cac89754f3ba5abacda604e5af865880ddc7c14db2fe297515c9da32a0d\" with signal terminated" Sep 16 04:43:02.510583 systemd[1]: cri-containerd-4be50cac89754f3ba5abacda604e5af865880ddc7c14db2fe297515c9da32a0d.scope: Deactivated successfully. Sep 16 04:43:02.512832 containerd[1865]: time="2025-09-16T04:43:02.512554033Z" level=info msg="received exit event container_id:\"4be50cac89754f3ba5abacda604e5af865880ddc7c14db2fe297515c9da32a0d\" id:\"4be50cac89754f3ba5abacda604e5af865880ddc7c14db2fe297515c9da32a0d\" pid:5674 exit_status:1 exited_at:{seconds:1757997782 nanos:512274009}" Sep 16 04:43:02.513390 containerd[1865]: time="2025-09-16T04:43:02.513239374Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4be50cac89754f3ba5abacda604e5af865880ddc7c14db2fe297515c9da32a0d\" id:\"4be50cac89754f3ba5abacda604e5af865880ddc7c14db2fe297515c9da32a0d\" pid:5674 exit_status:1 exited_at:{seconds:1757997782 nanos:512274009}" Sep 16 04:43:02.545566 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4be50cac89754f3ba5abacda604e5af865880ddc7c14db2fe297515c9da32a0d-rootfs.mount: Deactivated successfully. 
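[Editor's note] The StopContainer sequence above is the standard graceful-stop contract: containerd delivers SIGTERM ("Stop container ... with signal terminated"), waits up to the requested 30-second timeout, and escalates to SIGKILL only if the process outlives the grace period (here it exited promptly, exit_status:1). A minimal sketch of that escalation, as a hypothetical helper rather than containerd's actual code:

    package stopper

    import (
        "errors"
        "syscall"
        "time"
    )

    // stopWithTimeout sends SIGTERM, polls for exit, and escalates to
    // SIGKILL once the grace period is exhausted.
    func stopWithTimeout(pid int, grace time.Duration) error {
        if err := syscall.Kill(pid, syscall.SIGTERM); err != nil {
            return err
        }
        deadline := time.Now().Add(grace)
        for time.Now().Before(deadline) {
            // Signal 0 probes for existence without delivering anything.
            if err := syscall.Kill(pid, 0); errors.Is(err, syscall.ESRCH) {
                return nil // exited within the grace period
            }
            time.Sleep(100 * time.Millisecond)
        }
        return syscall.Kill(pid, syscall.SIGKILL)
    }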
Sep 16 04:43:02.608964 containerd[1865]: time="2025-09-16T04:43:02.608868797Z" level=info msg="StopContainer for \"4be50cac89754f3ba5abacda604e5af865880ddc7c14db2fe297515c9da32a0d\" returns successfully" Sep 16 04:43:02.609689 containerd[1865]: time="2025-09-16T04:43:02.609660653Z" level=info msg="StopPodSandbox for \"aaf37971be8de935833d69161b5d6d018b96300108de4defde7787687eaddc23\"" Sep 16 04:43:02.609969 containerd[1865]: time="2025-09-16T04:43:02.609801770Z" level=info msg="Container to stop \"4be50cac89754f3ba5abacda604e5af865880ddc7c14db2fe297515c9da32a0d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 16 04:43:02.618081 systemd[1]: cri-containerd-aaf37971be8de935833d69161b5d6d018b96300108de4defde7787687eaddc23.scope: Deactivated successfully. Sep 16 04:43:02.619464 containerd[1865]: time="2025-09-16T04:43:02.619408990Z" level=info msg="TaskExit event in podsandbox handler container_id:\"aaf37971be8de935833d69161b5d6d018b96300108de4defde7787687eaddc23\" id:\"aaf37971be8de935833d69161b5d6d018b96300108de4defde7787687eaddc23\" pid:5297 exit_status:137 exited_at:{seconds:1757997782 nanos:618922015}" Sep 16 04:43:02.647306 containerd[1865]: time="2025-09-16T04:43:02.647162939Z" level=info msg="shim disconnected" id=aaf37971be8de935833d69161b5d6d018b96300108de4defde7787687eaddc23 namespace=k8s.io Sep 16 04:43:02.647495 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aaf37971be8de935833d69161b5d6d018b96300108de4defde7787687eaddc23-rootfs.mount: Deactivated successfully. Sep 16 04:43:02.648535 containerd[1865]: time="2025-09-16T04:43:02.647265270Z" level=warning msg="cleaning up after shim disconnected" id=aaf37971be8de935833d69161b5d6d018b96300108de4defde7787687eaddc23 namespace=k8s.io Sep 16 04:43:02.648767 containerd[1865]: time="2025-09-16T04:43:02.648496444Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 16 04:43:02.675301 containerd[1865]: time="2025-09-16T04:43:02.675250602Z" level=info msg="received exit event sandbox_id:\"aaf37971be8de935833d69161b5d6d018b96300108de4defde7787687eaddc23\" exit_status:137 exited_at:{seconds:1757997782 nanos:618922015}" Sep 16 04:43:02.680103 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-aaf37971be8de935833d69161b5d6d018b96300108de4defde7787687eaddc23-shm.mount: Deactivated successfully. Sep 16 04:43:02.728659 systemd-networkd[1693]: cali45c660ab8c2: Link DOWN Sep 16 04:43:02.728665 systemd-networkd[1693]: cali45c660ab8c2: Lost carrier Sep 16 04:43:02.804829 containerd[1865]: 2025-09-16 04:43:02.727 [INFO][6201] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="aaf37971be8de935833d69161b5d6d018b96300108de4defde7787687eaddc23" Sep 16 04:43:02.804829 containerd[1865]: 2025-09-16 04:43:02.727 [INFO][6201] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="aaf37971be8de935833d69161b5d6d018b96300108de4defde7787687eaddc23" iface="eth0" netns="/var/run/netns/cni-f148ce85-d512-6a4b-412b-124c81e13acc" Sep 16 04:43:02.804829 containerd[1865]: 2025-09-16 04:43:02.728 [INFO][6201] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="aaf37971be8de935833d69161b5d6d018b96300108de4defde7787687eaddc23" iface="eth0" netns="/var/run/netns/cni-f148ce85-d512-6a4b-412b-124c81e13acc" Sep 16 04:43:02.804829 containerd[1865]: 2025-09-16 04:43:02.735 [INFO][6201] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="aaf37971be8de935833d69161b5d6d018b96300108de4defde7787687eaddc23" after=7.004293ms iface="eth0" netns="/var/run/netns/cni-f148ce85-d512-6a4b-412b-124c81e13acc" Sep 16 04:43:02.804829 containerd[1865]: 2025-09-16 04:43:02.735 [INFO][6201] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="aaf37971be8de935833d69161b5d6d018b96300108de4defde7787687eaddc23" Sep 16 04:43:02.804829 containerd[1865]: 2025-09-16 04:43:02.735 [INFO][6201] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aaf37971be8de935833d69161b5d6d018b96300108de4defde7787687eaddc23" Sep 16 04:43:02.804829 containerd[1865]: 2025-09-16 04:43:02.756 [INFO][6209] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="aaf37971be8de935833d69161b5d6d018b96300108de4defde7787687eaddc23" HandleID="k8s-pod-network.aaf37971be8de935833d69161b5d6d018b96300108de4defde7787687eaddc23" Workload="ci--4459.0.0--n--404d4275b5-k8s-calico--apiserver--7b46d565d6--8h62f-eth0" Sep 16 04:43:02.804829 containerd[1865]: 2025-09-16 04:43:02.757 [INFO][6209] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 16 04:43:02.804829 containerd[1865]: 2025-09-16 04:43:02.757 [INFO][6209] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 16 04:43:02.804829 containerd[1865]: 2025-09-16 04:43:02.795 [INFO][6209] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="aaf37971be8de935833d69161b5d6d018b96300108de4defde7787687eaddc23" HandleID="k8s-pod-network.aaf37971be8de935833d69161b5d6d018b96300108de4defde7787687eaddc23" Workload="ci--4459.0.0--n--404d4275b5-k8s-calico--apiserver--7b46d565d6--8h62f-eth0" Sep 16 04:43:02.804829 containerd[1865]: 2025-09-16 04:43:02.795 [INFO][6209] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="aaf37971be8de935833d69161b5d6d018b96300108de4defde7787687eaddc23" HandleID="k8s-pod-network.aaf37971be8de935833d69161b5d6d018b96300108de4defde7787687eaddc23" Workload="ci--4459.0.0--n--404d4275b5-k8s-calico--apiserver--7b46d565d6--8h62f-eth0" Sep 16 04:43:02.804829 containerd[1865]: 2025-09-16 04:43:02.798 [INFO][6209] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 16 04:43:02.804829 containerd[1865]: 2025-09-16 04:43:02.799 [INFO][6201] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="aaf37971be8de935833d69161b5d6d018b96300108de4defde7787687eaddc23" Sep 16 04:43:02.805316 containerd[1865]: time="2025-09-16T04:43:02.805275609Z" level=info msg="TearDown network for sandbox \"aaf37971be8de935833d69161b5d6d018b96300108de4defde7787687eaddc23\" successfully" Sep 16 04:43:02.805316 containerd[1865]: time="2025-09-16T04:43:02.805313178Z" level=info msg="StopPodSandbox for \"aaf37971be8de935833d69161b5d6d018b96300108de4defde7787687eaddc23\" returns successfully" Sep 16 04:43:02.807560 systemd[1]: run-netns-cni\x2df148ce85\x2dd512\x2d6a4b\x2d412b\x2d124c81e13acc.mount: Deactivated successfully. 
Sep 16 04:43:02.828653 kubelet[3280]: I0916 04:43:02.827084 3280 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fdb1262-eead-4398-8cae-b3e060954c4e" path="/var/lib/kubelet/pods/6fdb1262-eead-4398-8cae-b3e060954c4e/volumes" Sep 16 04:43:02.938379 kubelet[3280]: I0916 04:43:02.938344 3280 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6hwxd\" (UniqueName: \"kubernetes.io/projected/c870852e-7bb4-4f75-b61b-79ac5929ece7-kube-api-access-6hwxd\") pod \"c870852e-7bb4-4f75-b61b-79ac5929ece7\" (UID: \"c870852e-7bb4-4f75-b61b-79ac5929ece7\") " Sep 16 04:43:02.938895 kubelet[3280]: I0916 04:43:02.938880 3280 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c870852e-7bb4-4f75-b61b-79ac5929ece7-calico-apiserver-certs\") pod \"c870852e-7bb4-4f75-b61b-79ac5929ece7\" (UID: \"c870852e-7bb4-4f75-b61b-79ac5929ece7\") " Sep 16 04:43:02.942252 systemd[1]: var-lib-kubelet-pods-c870852e\x2d7bb4\x2d4f75\x2db61b\x2d79ac5929ece7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6hwxd.mount: Deactivated successfully. Sep 16 04:43:02.944298 kubelet[3280]: I0916 04:43:02.944194 3280 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c870852e-7bb4-4f75-b61b-79ac5929ece7-kube-api-access-6hwxd" (OuterVolumeSpecName: "kube-api-access-6hwxd") pod "c870852e-7bb4-4f75-b61b-79ac5929ece7" (UID: "c870852e-7bb4-4f75-b61b-79ac5929ece7"). InnerVolumeSpecName "kube-api-access-6hwxd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 16 04:43:02.946029 kubelet[3280]: I0916 04:43:02.946004 3280 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c870852e-7bb4-4f75-b61b-79ac5929ece7-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "c870852e-7bb4-4f75-b61b-79ac5929ece7" (UID: "c870852e-7bb4-4f75-b61b-79ac5929ece7"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 16 04:43:03.040038 kubelet[3280]: I0916 04:43:03.039979 3280 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6hwxd\" (UniqueName: \"kubernetes.io/projected/c870852e-7bb4-4f75-b61b-79ac5929ece7-kube-api-access-6hwxd\") on node \"ci-4459.0.0-n-404d4275b5\" DevicePath \"\"" Sep 16 04:43:03.040038 kubelet[3280]: I0916 04:43:03.040014 3280 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c870852e-7bb4-4f75-b61b-79ac5929ece7-calico-apiserver-certs\") on node \"ci-4459.0.0-n-404d4275b5\" DevicePath \"\"" Sep 16 04:43:03.224723 kubelet[3280]: I0916 04:43:03.224675 3280 scope.go:117] "RemoveContainer" containerID="4be50cac89754f3ba5abacda604e5af865880ddc7c14db2fe297515c9da32a0d" Sep 16 04:43:03.226354 containerd[1865]: time="2025-09-16T04:43:03.226322491Z" level=info msg="RemoveContainer for \"4be50cac89754f3ba5abacda604e5af865880ddc7c14db2fe297515c9da32a0d\"" Sep 16 04:43:03.230520 systemd[1]: Removed slice kubepods-besteffort-podc870852e_7bb4_4f75_b61b_79ac5929ece7.slice - libcontainer container kubepods-besteffort-podc870852e_7bb4_4f75_b61b_79ac5929ece7.slice. 
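[Editor's note] The reconciler_common / operation_generator pairs above are kubelet's volume reconciler at work: once pod c870852e… leaves the desired state, each still-mounted volume gets an UnmountVolume operation, and only a successful TearDown moves it to "Volume detached". A minimal sketch of that desired-versus-actual loop, with hypothetical types rather than kubelet's:

    package volumes

    type volumeKey struct{ podUID, volName string }

    // reconcile unmounts every volume that is mounted but no longer
    // desired; a failed unmount stays in actual and is retried on the
    // next pass, which is why the log shows discrete per-volume steps.
    func reconcile(desired, actual map[volumeKey]bool, unmount func(volumeKey) error) {
        for key := range actual {
            if desired[key] {
                continue // still wanted
            }
            if err := unmount(key); err == nil {
                delete(actual, key) // "Volume detached"
            }
        }
    }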
Sep 16 04:43:03.236221 containerd[1865]: time="2025-09-16T04:43:03.236189735Z" level=info msg="RemoveContainer for \"4be50cac89754f3ba5abacda604e5af865880ddc7c14db2fe297515c9da32a0d\" returns successfully" Sep 16 04:43:03.236929 kubelet[3280]: I0916 04:43:03.236906 3280 scope.go:117] "RemoveContainer" containerID="4be50cac89754f3ba5abacda604e5af865880ddc7c14db2fe297515c9da32a0d" Sep 16 04:43:03.237304 containerd[1865]: time="2025-09-16T04:43:03.237274848Z" level=error msg="ContainerStatus for \"4be50cac89754f3ba5abacda604e5af865880ddc7c14db2fe297515c9da32a0d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4be50cac89754f3ba5abacda604e5af865880ddc7c14db2fe297515c9da32a0d\": not found" Sep 16 04:43:03.237408 kubelet[3280]: E0916 04:43:03.237388 3280 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4be50cac89754f3ba5abacda604e5af865880ddc7c14db2fe297515c9da32a0d\": not found" containerID="4be50cac89754f3ba5abacda604e5af865880ddc7c14db2fe297515c9da32a0d" Sep 16 04:43:03.237497 kubelet[3280]: I0916 04:43:03.237413 3280 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4be50cac89754f3ba5abacda604e5af865880ddc7c14db2fe297515c9da32a0d"} err="failed to get container status \"4be50cac89754f3ba5abacda604e5af865880ddc7c14db2fe297515c9da32a0d\": rpc error: code = NotFound desc = an error occurred when try to find container \"4be50cac89754f3ba5abacda604e5af865880ddc7c14db2fe297515c9da32a0d\": not found" Sep 16 04:43:03.547652 systemd[1]: var-lib-kubelet-pods-c870852e\x2d7bb4\x2d4f75\x2db61b\x2d79ac5929ece7-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. 
Sep 16 04:43:03.861324 containerd[1865]: time="2025-09-16T04:43:03.861108456Z" level=info msg="TaskExit event in podsandbox handler exit_status:137 exited_at:{seconds:1757997782 nanos:618922015}" Sep 16 04:43:04.826080 kubelet[3280]: I0916 04:43:04.826044 3280 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c870852e-7bb4-4f75-b61b-79ac5929ece7" path="/var/lib/kubelet/pods/c870852e-7bb4-4f75-b61b-79ac5929ece7/volumes" Sep 16 04:43:05.192399 containerd[1865]: time="2025-09-16T04:43:05.192123625Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d1f08e9c24789e25c7bb9b1a3c69d1fa9201d4793e1c9de4574b81a0f834b0b1\" id:\"98e7412d4e16a644401849d675627d38a79f5b0d55ae50feb5db327edde2ee7b\" pid:6237 exited_at:{seconds:1757997785 nanos:191828224}" Sep 16 04:43:10.169057 containerd[1865]: time="2025-09-16T04:43:10.168992665Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e15f6881385296be1473cd7ced5095e431d6a3fac2873178e81ee662b51c66e1\" id:\"68e5a555789c84022b9609ce9e7c3c0538c682266a1ec424f47a4d4da6c23e53\" pid:6262 exited_at:{seconds:1757997790 nanos:168751794}" Sep 16 04:43:12.078951 containerd[1865]: time="2025-09-16T04:43:12.078906101Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e15f6881385296be1473cd7ced5095e431d6a3fac2873178e81ee662b51c66e1\" id:\"93d05308cb2e4784f8c870dc6a6335501f69689bc480e88bba1678d997a34096\" pid:6283 exited_at:{seconds:1757997792 nanos:78592099}" Sep 16 04:43:24.515761 containerd[1865]: time="2025-09-16T04:43:24.515709080Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f4fee02196b9e4fd8828c2aa0a9ce23a2673f599d1b90bd498ee291154737863\" id:\"5f6a81c339e9ed40c4756536d567e158d9a4f7e44e3f8e25266ba615e230739f\" pid:6313 exited_at:{seconds:1757997804 nanos:515181456}" Sep 16 04:43:32.653806 containerd[1865]: time="2025-09-16T04:43:32.653765995Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d1f08e9c24789e25c7bb9b1a3c69d1fa9201d4793e1c9de4574b81a0f834b0b1\" id:\"afc5e98df0c0fa75e54e2850d868b2bcdfcbd186b9dfeabed4eb900015f504e8\" pid:6337 exited_at:{seconds:1757997812 nanos:653478938}" Sep 16 04:43:35.177399 containerd[1865]: time="2025-09-16T04:43:35.177357682Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d1f08e9c24789e25c7bb9b1a3c69d1fa9201d4793e1c9de4574b81a0f834b0b1\" id:\"63f44aef9db3aeae7d01c7f57b7f4d523d020311736610f9d9de241ff5959a04\" pid:6360 exited_at:{seconds:1757997815 nanos:177090578}" Sep 16 04:43:40.168656 containerd[1865]: time="2025-09-16T04:43:40.168601334Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e15f6881385296be1473cd7ced5095e431d6a3fac2873178e81ee662b51c66e1\" id:\"828be96145b0ceb945ef2579781e6936c845e16d8e78c65746567fbea46da737\" pid:6388 exited_at:{seconds:1757997820 nanos:168397464}" Sep 16 04:43:44.839694 containerd[1865]: time="2025-09-16T04:43:44.839641428Z" level=info msg="StopPodSandbox for \"aaf37971be8de935833d69161b5d6d018b96300108de4defde7787687eaddc23\"" Sep 16 04:43:44.895664 containerd[1865]: 2025-09-16 04:43:44.864 [WARNING][6410] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="aaf37971be8de935833d69161b5d6d018b96300108de4defde7787687eaddc23" WorkloadEndpoint="ci--4459.0.0--n--404d4275b5-k8s-calico--apiserver--7b46d565d6--8h62f-eth0" Sep 16 04:43:44.895664 containerd[1865]: 2025-09-16 04:43:44.865 [INFO][6410] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="aaf37971be8de935833d69161b5d6d018b96300108de4defde7787687eaddc23" Sep 16 04:43:44.895664 containerd[1865]: 2025-09-16 04:43:44.865 [INFO][6410] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="aaf37971be8de935833d69161b5d6d018b96300108de4defde7787687eaddc23" iface="eth0" netns="" Sep 16 04:43:44.895664 containerd[1865]: 2025-09-16 04:43:44.865 [INFO][6410] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="aaf37971be8de935833d69161b5d6d018b96300108de4defde7787687eaddc23" Sep 16 04:43:44.895664 containerd[1865]: 2025-09-16 04:43:44.865 [INFO][6410] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aaf37971be8de935833d69161b5d6d018b96300108de4defde7787687eaddc23" Sep 16 04:43:44.895664 containerd[1865]: 2025-09-16 04:43:44.885 [INFO][6417] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="aaf37971be8de935833d69161b5d6d018b96300108de4defde7787687eaddc23" HandleID="k8s-pod-network.aaf37971be8de935833d69161b5d6d018b96300108de4defde7787687eaddc23" Workload="ci--4459.0.0--n--404d4275b5-k8s-calico--apiserver--7b46d565d6--8h62f-eth0" Sep 16 04:43:44.895664 containerd[1865]: 2025-09-16 04:43:44.885 [INFO][6417] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 16 04:43:44.895664 containerd[1865]: 2025-09-16 04:43:44.886 [INFO][6417] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 16 04:43:44.895664 containerd[1865]: 2025-09-16 04:43:44.892 [WARNING][6417] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="aaf37971be8de935833d69161b5d6d018b96300108de4defde7787687eaddc23" HandleID="k8s-pod-network.aaf37971be8de935833d69161b5d6d018b96300108de4defde7787687eaddc23" Workload="ci--4459.0.0--n--404d4275b5-k8s-calico--apiserver--7b46d565d6--8h62f-eth0" Sep 16 04:43:44.895664 containerd[1865]: 2025-09-16 04:43:44.892 [INFO][6417] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="aaf37971be8de935833d69161b5d6d018b96300108de4defde7787687eaddc23" HandleID="k8s-pod-network.aaf37971be8de935833d69161b5d6d018b96300108de4defde7787687eaddc23" Workload="ci--4459.0.0--n--404d4275b5-k8s-calico--apiserver--7b46d565d6--8h62f-eth0" Sep 16 04:43:44.895664 containerd[1865]: 2025-09-16 04:43:44.893 [INFO][6417] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 16 04:43:44.895664 containerd[1865]: 2025-09-16 04:43:44.894 [INFO][6410] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="aaf37971be8de935833d69161b5d6d018b96300108de4defde7787687eaddc23" Sep 16 04:43:44.896265 containerd[1865]: time="2025-09-16T04:43:44.895705340Z" level=info msg="TearDown network for sandbox \"aaf37971be8de935833d69161b5d6d018b96300108de4defde7787687eaddc23\" successfully" Sep 16 04:43:44.896265 containerd[1865]: time="2025-09-16T04:43:44.895730364Z" level=info msg="StopPodSandbox for \"aaf37971be8de935833d69161b5d6d018b96300108de4defde7787687eaddc23\" returns successfully" Sep 16 04:43:44.896649 containerd[1865]: time="2025-09-16T04:43:44.896623175Z" level=info msg="RemovePodSandbox for \"aaf37971be8de935833d69161b5d6d018b96300108de4defde7787687eaddc23\"" Sep 16 04:43:44.896707 containerd[1865]: time="2025-09-16T04:43:44.896665992Z" level=info msg="Forcibly stopping sandbox \"aaf37971be8de935833d69161b5d6d018b96300108de4defde7787687eaddc23\"" Sep 16 04:43:44.953048 containerd[1865]: 2025-09-16 04:43:44.928 [WARNING][6432] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="aaf37971be8de935833d69161b5d6d018b96300108de4defde7787687eaddc23" WorkloadEndpoint="ci--4459.0.0--n--404d4275b5-k8s-calico--apiserver--7b46d565d6--8h62f-eth0" Sep 16 04:43:44.953048 containerd[1865]: 2025-09-16 04:43:44.929 [INFO][6432] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="aaf37971be8de935833d69161b5d6d018b96300108de4defde7787687eaddc23" Sep 16 04:43:44.953048 containerd[1865]: 2025-09-16 04:43:44.929 [INFO][6432] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="aaf37971be8de935833d69161b5d6d018b96300108de4defde7787687eaddc23" iface="eth0" netns="" Sep 16 04:43:44.953048 containerd[1865]: 2025-09-16 04:43:44.929 [INFO][6432] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="aaf37971be8de935833d69161b5d6d018b96300108de4defde7787687eaddc23" Sep 16 04:43:44.953048 containerd[1865]: 2025-09-16 04:43:44.929 [INFO][6432] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aaf37971be8de935833d69161b5d6d018b96300108de4defde7787687eaddc23" Sep 16 04:43:44.953048 containerd[1865]: 2025-09-16 04:43:44.941 [INFO][6439] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="aaf37971be8de935833d69161b5d6d018b96300108de4defde7787687eaddc23" HandleID="k8s-pod-network.aaf37971be8de935833d69161b5d6d018b96300108de4defde7787687eaddc23" Workload="ci--4459.0.0--n--404d4275b5-k8s-calico--apiserver--7b46d565d6--8h62f-eth0" Sep 16 04:43:44.953048 containerd[1865]: 2025-09-16 04:43:44.941 [INFO][6439] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 16 04:43:44.953048 containerd[1865]: 2025-09-16 04:43:44.941 [INFO][6439] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 16 04:43:44.953048 containerd[1865]: 2025-09-16 04:43:44.946 [WARNING][6439] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="aaf37971be8de935833d69161b5d6d018b96300108de4defde7787687eaddc23" HandleID="k8s-pod-network.aaf37971be8de935833d69161b5d6d018b96300108de4defde7787687eaddc23" Workload="ci--4459.0.0--n--404d4275b5-k8s-calico--apiserver--7b46d565d6--8h62f-eth0" Sep 16 04:43:44.953048 containerd[1865]: 2025-09-16 04:43:44.946 [INFO][6439] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="aaf37971be8de935833d69161b5d6d018b96300108de4defde7787687eaddc23" HandleID="k8s-pod-network.aaf37971be8de935833d69161b5d6d018b96300108de4defde7787687eaddc23" Workload="ci--4459.0.0--n--404d4275b5-k8s-calico--apiserver--7b46d565d6--8h62f-eth0" Sep 16 04:43:44.953048 containerd[1865]: 2025-09-16 04:43:44.950 [INFO][6439] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 16 04:43:44.953048 containerd[1865]: 2025-09-16 04:43:44.952 [INFO][6432] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="aaf37971be8de935833d69161b5d6d018b96300108de4defde7787687eaddc23" Sep 16 04:43:44.953322 containerd[1865]: time="2025-09-16T04:43:44.953080738Z" level=info msg="TearDown network for sandbox \"aaf37971be8de935833d69161b5d6d018b96300108de4defde7787687eaddc23\" successfully" Sep 16 04:43:44.954368 containerd[1865]: time="2025-09-16T04:43:44.954345895Z" level=info msg="Ensure that sandbox aaf37971be8de935833d69161b5d6d018b96300108de4defde7787687eaddc23 in task-service has been cleanup successfully" Sep 16 04:43:44.969242 containerd[1865]: time="2025-09-16T04:43:44.969212132Z" level=info msg="RemovePodSandbox \"aaf37971be8de935833d69161b5d6d018b96300108de4defde7787687eaddc23\" returns successfully" Sep 16 04:43:44.969786 containerd[1865]: time="2025-09-16T04:43:44.969649320Z" level=info msg="StopPodSandbox for \"fac594be40861758e6b24dac15a3376b609ab2e87e63194f42b29974ebd355ec\"" Sep 16 04:43:45.016791 containerd[1865]: 2025-09-16 04:43:44.993 [WARNING][6453] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="fac594be40861758e6b24dac15a3376b609ab2e87e63194f42b29974ebd355ec" WorkloadEndpoint="ci--4459.0.0--n--404d4275b5-k8s-calico--apiserver--7b46d565d6--2bmc2-eth0" Sep 16 04:43:45.016791 containerd[1865]: 2025-09-16 04:43:44.993 [INFO][6453] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fac594be40861758e6b24dac15a3376b609ab2e87e63194f42b29974ebd355ec" Sep 16 04:43:45.016791 containerd[1865]: 2025-09-16 04:43:44.993 [INFO][6453] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="fac594be40861758e6b24dac15a3376b609ab2e87e63194f42b29974ebd355ec" iface="eth0" netns="" Sep 16 04:43:45.016791 containerd[1865]: 2025-09-16 04:43:44.993 [INFO][6453] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fac594be40861758e6b24dac15a3376b609ab2e87e63194f42b29974ebd355ec" Sep 16 04:43:45.016791 containerd[1865]: 2025-09-16 04:43:44.993 [INFO][6453] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fac594be40861758e6b24dac15a3376b609ab2e87e63194f42b29974ebd355ec" Sep 16 04:43:45.016791 containerd[1865]: 2025-09-16 04:43:45.008 [INFO][6461] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fac594be40861758e6b24dac15a3376b609ab2e87e63194f42b29974ebd355ec" HandleID="k8s-pod-network.fac594be40861758e6b24dac15a3376b609ab2e87e63194f42b29974ebd355ec" Workload="ci--4459.0.0--n--404d4275b5-k8s-calico--apiserver--7b46d565d6--2bmc2-eth0" Sep 16 04:43:45.016791 containerd[1865]: 2025-09-16 04:43:45.008 [INFO][6461] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 16 04:43:45.016791 containerd[1865]: 2025-09-16 04:43:45.008 [INFO][6461] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 16 04:43:45.016791 containerd[1865]: 2025-09-16 04:43:45.012 [WARNING][6461] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="fac594be40861758e6b24dac15a3376b609ab2e87e63194f42b29974ebd355ec" HandleID="k8s-pod-network.fac594be40861758e6b24dac15a3376b609ab2e87e63194f42b29974ebd355ec" Workload="ci--4459.0.0--n--404d4275b5-k8s-calico--apiserver--7b46d565d6--2bmc2-eth0" Sep 16 04:43:45.016791 containerd[1865]: 2025-09-16 04:43:45.012 [INFO][6461] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fac594be40861758e6b24dac15a3376b609ab2e87e63194f42b29974ebd355ec" HandleID="k8s-pod-network.fac594be40861758e6b24dac15a3376b609ab2e87e63194f42b29974ebd355ec" Workload="ci--4459.0.0--n--404d4275b5-k8s-calico--apiserver--7b46d565d6--2bmc2-eth0" Sep 16 04:43:45.016791 containerd[1865]: 2025-09-16 04:43:45.014 [INFO][6461] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 16 04:43:45.016791 containerd[1865]: 2025-09-16 04:43:45.015 [INFO][6453] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="fac594be40861758e6b24dac15a3376b609ab2e87e63194f42b29974ebd355ec" Sep 16 04:43:45.017532 containerd[1865]: time="2025-09-16T04:43:45.016764001Z" level=info msg="TearDown network for sandbox \"fac594be40861758e6b24dac15a3376b609ab2e87e63194f42b29974ebd355ec\" successfully" Sep 16 04:43:45.017532 containerd[1865]: time="2025-09-16T04:43:45.016959750Z" level=info msg="StopPodSandbox for \"fac594be40861758e6b24dac15a3376b609ab2e87e63194f42b29974ebd355ec\" returns successfully" Sep 16 04:43:45.018023 containerd[1865]: time="2025-09-16T04:43:45.017727677Z" level=info msg="RemovePodSandbox for \"fac594be40861758e6b24dac15a3376b609ab2e87e63194f42b29974ebd355ec\"" Sep 16 04:43:45.018023 containerd[1865]: time="2025-09-16T04:43:45.017756750Z" level=info msg="Forcibly stopping sandbox \"fac594be40861758e6b24dac15a3376b609ab2e87e63194f42b29974ebd355ec\"" Sep 16 04:43:45.062894 containerd[1865]: 2025-09-16 04:43:45.040 [WARNING][6475] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="fac594be40861758e6b24dac15a3376b609ab2e87e63194f42b29974ebd355ec" WorkloadEndpoint="ci--4459.0.0--n--404d4275b5-k8s-calico--apiserver--7b46d565d6--2bmc2-eth0" Sep 16 04:43:45.062894 containerd[1865]: 2025-09-16 04:43:45.040 [INFO][6475] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fac594be40861758e6b24dac15a3376b609ab2e87e63194f42b29974ebd355ec" Sep 16 04:43:45.062894 containerd[1865]: 2025-09-16 04:43:45.040 [INFO][6475] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fac594be40861758e6b24dac15a3376b609ab2e87e63194f42b29974ebd355ec" iface="eth0" netns="" Sep 16 04:43:45.062894 containerd[1865]: 2025-09-16 04:43:45.040 [INFO][6475] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fac594be40861758e6b24dac15a3376b609ab2e87e63194f42b29974ebd355ec" Sep 16 04:43:45.062894 containerd[1865]: 2025-09-16 04:43:45.040 [INFO][6475] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fac594be40861758e6b24dac15a3376b609ab2e87e63194f42b29974ebd355ec" Sep 16 04:43:45.062894 containerd[1865]: 2025-09-16 04:43:45.054 [INFO][6482] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fac594be40861758e6b24dac15a3376b609ab2e87e63194f42b29974ebd355ec" HandleID="k8s-pod-network.fac594be40861758e6b24dac15a3376b609ab2e87e63194f42b29974ebd355ec" Workload="ci--4459.0.0--n--404d4275b5-k8s-calico--apiserver--7b46d565d6--2bmc2-eth0" Sep 16 04:43:45.062894 containerd[1865]: 2025-09-16 04:43:45.054 [INFO][6482] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 16 04:43:45.062894 containerd[1865]: 2025-09-16 04:43:45.054 [INFO][6482] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 16 04:43:45.062894 containerd[1865]: 2025-09-16 04:43:45.059 [WARNING][6482] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fac594be40861758e6b24dac15a3376b609ab2e87e63194f42b29974ebd355ec" HandleID="k8s-pod-network.fac594be40861758e6b24dac15a3376b609ab2e87e63194f42b29974ebd355ec" Workload="ci--4459.0.0--n--404d4275b5-k8s-calico--apiserver--7b46d565d6--2bmc2-eth0" Sep 16 04:43:45.062894 containerd[1865]: 2025-09-16 04:43:45.059 [INFO][6482] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fac594be40861758e6b24dac15a3376b609ab2e87e63194f42b29974ebd355ec" HandleID="k8s-pod-network.fac594be40861758e6b24dac15a3376b609ab2e87e63194f42b29974ebd355ec" Workload="ci--4459.0.0--n--404d4275b5-k8s-calico--apiserver--7b46d565d6--2bmc2-eth0" Sep 16 04:43:45.062894 containerd[1865]: 2025-09-16 04:43:45.060 [INFO][6482] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 16 04:43:45.062894 containerd[1865]: 2025-09-16 04:43:45.061 [INFO][6475] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fac594be40861758e6b24dac15a3376b609ab2e87e63194f42b29974ebd355ec" Sep 16 04:43:45.063352 containerd[1865]: time="2025-09-16T04:43:45.062995727Z" level=info msg="TearDown network for sandbox \"fac594be40861758e6b24dac15a3376b609ab2e87e63194f42b29974ebd355ec\" successfully" Sep 16 04:43:45.064864 containerd[1865]: time="2025-09-16T04:43:45.064639447Z" level=info msg="Ensure that sandbox fac594be40861758e6b24dac15a3376b609ab2e87e63194f42b29974ebd355ec in task-service has been cleanup successfully" Sep 16 04:43:45.076731 containerd[1865]: time="2025-09-16T04:43:45.076706778Z" level=info msg="RemovePodSandbox \"fac594be40861758e6b24dac15a3376b609ab2e87e63194f42b29974ebd355ec\" returns successfully" Sep 16 04:43:54.517334 containerd[1865]: time="2025-09-16T04:43:54.517271350Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f4fee02196b9e4fd8828c2aa0a9ce23a2673f599d1b90bd498ee291154737863\" id:\"04eed08898fbb51eb0eb35e4e9b4fb42a1e8bcfd616749b9833e809bb6a20101\" pid:6524 exited_at:{seconds:1757997834 nanos:516533127}" Sep 16 04:44:01.396100 systemd[1]: Started sshd@7-10.200.20.12:22-10.200.16.10:33330.service - OpenSSH per-connection server daemon (10.200.16.10:33330). Sep 16 04:44:01.824079 sshd[6538]: Accepted publickey for core from 10.200.16.10 port 33330 ssh2: RSA SHA256:Zp261ZQ+rITQnwTNSqi9daI+o6M3rFPHdoLGEYx3TkE Sep 16 04:44:01.826175 sshd-session[6538]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:44:01.831275 systemd-logind[1847]: New session 10 of user core. Sep 16 04:44:01.837729 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 16 04:44:02.228068 sshd[6541]: Connection closed by 10.200.16.10 port 33330 Sep 16 04:44:02.228848 sshd-session[6538]: pam_unix(sshd:session): session closed for user core Sep 16 04:44:02.233980 systemd[1]: sshd@7-10.200.20.12:22-10.200.16.10:33330.service: Deactivated successfully. Sep 16 04:44:02.236267 systemd[1]: session-10.scope: Deactivated successfully. Sep 16 04:44:02.239490 systemd-logind[1847]: Session 10 logged out. Waiting for processes to exit. Sep 16 04:44:02.241556 systemd-logind[1847]: Removed session 10. 
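[Editor's note] The forced teardown at 04:43:44-45 re-runs a cleanup that already completed at 04:43:02 and still succeeds: the datastore lookup warns "WorkloadEndpoint does not exist ... moving forward with the clean up" and IPAM warns "Asked to release address but it doesn't exist. Ignoring". Deletes that treat "already absent" as success are what make RemovePodSandbox safe to repeat. A minimal sketch of the convention, with a hypothetical not-found sentinel:

    package teardown

    import "errors"

    var errNotFound = errors.New("not found") // hypothetical sentinel

    // ensureGone makes a delete idempotent: a resource that is already
    // absent counts as deleted, so teardown can be re-run at will.
    func ensureGone(del func(id string) error, id string) error {
        if err := del(id); err != nil && !errors.Is(err, errNotFound) {
            return err // a real failure, not mere absence
        }
        return nil // gone now, or was never there: same end state
    }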
Sep 16 04:44:05.188912 containerd[1865]: time="2025-09-16T04:44:05.188731047Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d1f08e9c24789e25c7bb9b1a3c69d1fa9201d4793e1c9de4574b81a0f834b0b1\" id:\"e5f69adf931afcce1292e274b28cb9cbefca5d8b870c19698ecbac8d94afabdb\" pid:6565 exited_at:{seconds:1757997845 nanos:188456695}" Sep 16 04:44:07.302555 systemd[1]: Started sshd@8-10.200.20.12:22-10.200.16.10:33336.service - OpenSSH per-connection server daemon (10.200.16.10:33336). Sep 16 04:44:07.727576 sshd[6576]: Accepted publickey for core from 10.200.16.10 port 33336 ssh2: RSA SHA256:Zp261ZQ+rITQnwTNSqi9daI+o6M3rFPHdoLGEYx3TkE Sep 16 04:44:07.728733 sshd-session[6576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:44:07.732680 systemd-logind[1847]: New session 11 of user core. Sep 16 04:44:07.737741 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 16 04:44:08.089804 sshd[6579]: Connection closed by 10.200.16.10 port 33336 Sep 16 04:44:08.089699 sshd-session[6576]: pam_unix(sshd:session): session closed for user core Sep 16 04:44:08.093122 systemd[1]: sshd@8-10.200.20.12:22-10.200.16.10:33336.service: Deactivated successfully. Sep 16 04:44:08.094942 systemd[1]: session-11.scope: Deactivated successfully. Sep 16 04:44:08.095541 systemd-logind[1847]: Session 11 logged out. Waiting for processes to exit. Sep 16 04:44:08.096571 systemd-logind[1847]: Removed session 11. Sep 16 04:44:10.170474 containerd[1865]: time="2025-09-16T04:44:10.170434402Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e15f6881385296be1473cd7ced5095e431d6a3fac2873178e81ee662b51c66e1\" id:\"d2b22339e39579a6bcd8e31b3dcea9c25172d751a80006f9fe0dfb5db6ef8e91\" pid:6603 exited_at:{seconds:1757997850 nanos:170162794}" Sep 16 04:44:12.066631 containerd[1865]: time="2025-09-16T04:44:12.066578494Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e15f6881385296be1473cd7ced5095e431d6a3fac2873178e81ee662b51c66e1\" id:\"c33d192356b092035dec9cd6ea694db91228b2b22799228f1a3664af2710a784\" pid:6624 exited_at:{seconds:1757997852 nanos:66420673}" Sep 16 04:44:13.182247 systemd[1]: Started sshd@9-10.200.20.12:22-10.200.16.10:42368.service - OpenSSH per-connection server daemon (10.200.16.10:42368). Sep 16 04:44:13.680953 sshd[6634]: Accepted publickey for core from 10.200.16.10 port 42368 ssh2: RSA SHA256:Zp261ZQ+rITQnwTNSqi9daI+o6M3rFPHdoLGEYx3TkE Sep 16 04:44:13.682239 sshd-session[6634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:44:13.685707 systemd-logind[1847]: New session 12 of user core. Sep 16 04:44:13.690740 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 16 04:44:14.091287 sshd[6637]: Connection closed by 10.200.16.10 port 42368 Sep 16 04:44:14.091902 sshd-session[6634]: pam_unix(sshd:session): session closed for user core Sep 16 04:44:14.095103 systemd[1]: sshd@9-10.200.20.12:22-10.200.16.10:42368.service: Deactivated successfully. Sep 16 04:44:14.096684 systemd[1]: session-12.scope: Deactivated successfully. Sep 16 04:44:14.097340 systemd-logind[1847]: Session 12 logged out. Waiting for processes to exit. Sep 16 04:44:14.098967 systemd-logind[1847]: Removed session 12. Sep 16 04:44:14.167374 systemd[1]: Started sshd@10-10.200.20.12:22-10.200.16.10:42376.service - OpenSSH per-connection server daemon (10.200.16.10:42376). 
Sep 16 04:44:14.594827 sshd[6650]: Accepted publickey for core from 10.200.16.10 port 42376 ssh2: RSA SHA256:Zp261ZQ+rITQnwTNSqi9daI+o6M3rFPHdoLGEYx3TkE Sep 16 04:44:14.595864 sshd-session[6650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:44:14.599984 systemd-logind[1847]: New session 13 of user core. Sep 16 04:44:14.608719 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 16 04:44:14.970939 sshd[6653]: Connection closed by 10.200.16.10 port 42376 Sep 16 04:44:14.971517 sshd-session[6650]: pam_unix(sshd:session): session closed for user core Sep 16 04:44:14.974667 systemd[1]: sshd@10-10.200.20.12:22-10.200.16.10:42376.service: Deactivated successfully. Sep 16 04:44:14.976454 systemd[1]: session-13.scope: Deactivated successfully. Sep 16 04:44:14.977366 systemd-logind[1847]: Session 13 logged out. Waiting for processes to exit. Sep 16 04:44:14.979249 systemd-logind[1847]: Removed session 13. Sep 16 04:44:15.048068 systemd[1]: Started sshd@11-10.200.20.12:22-10.200.16.10:42386.service - OpenSSH per-connection server daemon (10.200.16.10:42386). Sep 16 04:44:15.470504 sshd[6663]: Accepted publickey for core from 10.200.16.10 port 42386 ssh2: RSA SHA256:Zp261ZQ+rITQnwTNSqi9daI+o6M3rFPHdoLGEYx3TkE Sep 16 04:44:15.471594 sshd-session[6663]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:44:15.475206 systemd-logind[1847]: New session 14 of user core. Sep 16 04:44:15.479735 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 16 04:44:15.816432 sshd[6666]: Connection closed by 10.200.16.10 port 42386 Sep 16 04:44:15.815913 sshd-session[6663]: pam_unix(sshd:session): session closed for user core Sep 16 04:44:15.818953 systemd-logind[1847]: Session 14 logged out. Waiting for processes to exit. Sep 16 04:44:15.819194 systemd[1]: sshd@11-10.200.20.12:22-10.200.16.10:42386.service: Deactivated successfully. Sep 16 04:44:15.821010 systemd[1]: session-14.scope: Deactivated successfully. Sep 16 04:44:15.823156 systemd-logind[1847]: Removed session 14. Sep 16 04:44:20.895497 systemd[1]: Started sshd@12-10.200.20.12:22-10.200.16.10:43554.service - OpenSSH per-connection server daemon (10.200.16.10:43554). Sep 16 04:44:21.312394 sshd[6684]: Accepted publickey for core from 10.200.16.10 port 43554 ssh2: RSA SHA256:Zp261ZQ+rITQnwTNSqi9daI+o6M3rFPHdoLGEYx3TkE Sep 16 04:44:21.313492 sshd-session[6684]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:44:21.317076 systemd-logind[1847]: New session 15 of user core. Sep 16 04:44:21.321720 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 16 04:44:21.682514 sshd[6687]: Connection closed by 10.200.16.10 port 43554 Sep 16 04:44:21.683018 sshd-session[6684]: pam_unix(sshd:session): session closed for user core Sep 16 04:44:21.686326 systemd[1]: sshd@12-10.200.20.12:22-10.200.16.10:43554.service: Deactivated successfully. Sep 16 04:44:21.689214 systemd[1]: session-15.scope: Deactivated successfully. Sep 16 04:44:21.690271 systemd-logind[1847]: Session 15 logged out. Waiting for processes to exit. Sep 16 04:44:21.691949 systemd-logind[1847]: Removed session 15. 
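[Editor's note] Each SSH login above runs as its own transient unit (sshd@N-<local>:22-<peer>:<port>.service plus a matching session-N.scope), so the accept/open/close lifecycle can be audited straight from the journal. A small sketch that extracts the accepted-connection lines from text in this exact format (the regexp is tuned to these lines and nothing more):

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    // Matches e.g. "sshd[6650]: Accepted publickey for core from
    // 10.200.16.10 port 42376 ssh2: ..."
    var accepted = regexp.MustCompile(`Accepted (\S+) for (\S+) from (\S+) port (\d+)`)

    func main() {
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 64*1024), 1024*1024) // journal lines run long
        for sc.Scan() {
            if m := accepted.FindStringSubmatch(sc.Text()); m != nil {
                // method, user, peer address, peer port
                fmt.Printf("%s login: user=%s from=%s:%s\n", m[1], m[2], m[3], m[4])
            }
        }
    }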
Sep 16 04:44:24.519229 containerd[1865]: time="2025-09-16T04:44:24.519189241Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f4fee02196b9e4fd8828c2aa0a9ce23a2673f599d1b90bd498ee291154737863\" id:\"9e54eda54f5e3f4f496762cb1a52d562313fd453df9eca31558563742068e51d\" pid:6712 exited_at:{seconds:1757997864 nanos:518808798}" Sep 16 04:44:26.761746 systemd[1]: Started sshd@13-10.200.20.12:22-10.200.16.10:43568.service - OpenSSH per-connection server daemon (10.200.16.10:43568). Sep 16 04:44:27.177146 sshd[6724]: Accepted publickey for core from 10.200.16.10 port 43568 ssh2: RSA SHA256:Zp261ZQ+rITQnwTNSqi9daI+o6M3rFPHdoLGEYx3TkE Sep 16 04:44:27.178120 sshd-session[6724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:44:27.182753 systemd-logind[1847]: New session 16 of user core. Sep 16 04:44:27.186729 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 16 04:44:27.543429 sshd[6727]: Connection closed by 10.200.16.10 port 43568 Sep 16 04:44:27.544047 sshd-session[6724]: pam_unix(sshd:session): session closed for user core Sep 16 04:44:27.547120 systemd[1]: sshd@13-10.200.20.12:22-10.200.16.10:43568.service: Deactivated successfully. Sep 16 04:44:27.549012 systemd[1]: session-16.scope: Deactivated successfully. Sep 16 04:44:27.549753 systemd-logind[1847]: Session 16 logged out. Waiting for processes to exit. Sep 16 04:44:27.551172 systemd-logind[1847]: Removed session 16. Sep 16 04:44:32.619817 systemd[1]: Started sshd@14-10.200.20.12:22-10.200.16.10:58222.service - OpenSSH per-connection server daemon (10.200.16.10:58222). Sep 16 04:44:32.811564 containerd[1865]: time="2025-09-16T04:44:32.811524528Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d1f08e9c24789e25c7bb9b1a3c69d1fa9201d4793e1c9de4574b81a0f834b0b1\" id:\"faf4bebb1c8f83c443a0cbe389d1d1cb87abd8d12a248fec72e208acbcb56a32\" pid:6753 exited_at:{seconds:1757997872 nanos:811178005}" Sep 16 04:44:33.040404 sshd[6745]: Accepted publickey for core from 10.200.16.10 port 58222 ssh2: RSA SHA256:Zp261ZQ+rITQnwTNSqi9daI+o6M3rFPHdoLGEYx3TkE Sep 16 04:44:33.041846 sshd-session[6745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:44:33.047548 systemd-logind[1847]: New session 17 of user core. Sep 16 04:44:33.051737 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 16 04:44:33.408304 sshd[6765]: Connection closed by 10.200.16.10 port 58222 Sep 16 04:44:33.408843 sshd-session[6745]: pam_unix(sshd:session): session closed for user core Sep 16 04:44:33.412413 systemd[1]: sshd@14-10.200.20.12:22-10.200.16.10:58222.service: Deactivated successfully. Sep 16 04:44:33.414197 systemd[1]: session-17.scope: Deactivated successfully. Sep 16 04:44:33.414902 systemd-logind[1847]: Session 17 logged out. Waiting for processes to exit. Sep 16 04:44:33.416289 systemd-logind[1847]: Removed session 17. Sep 16 04:44:33.493804 systemd[1]: Started sshd@15-10.200.20.12:22-10.200.16.10:58238.service - OpenSSH per-connection server daemon (10.200.16.10:58238). Sep 16 04:44:33.948650 sshd[6776]: Accepted publickey for core from 10.200.16.10 port 58238 ssh2: RSA SHA256:Zp261ZQ+rITQnwTNSqi9daI+o6M3rFPHdoLGEYx3TkE Sep 16 04:44:33.949881 sshd-session[6776]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:44:33.956453 systemd-logind[1847]: New session 18 of user core. Sep 16 04:44:33.958724 systemd[1]: Started session-18.scope - Session 18 of User core. 
Sep 16 04:44:34.467660 sshd[6779]: Connection closed by 10.200.16.10 port 58238 Sep 16 04:44:34.468839 sshd-session[6776]: pam_unix(sshd:session): session closed for user core Sep 16 04:44:34.472322 systemd[1]: sshd@15-10.200.20.12:22-10.200.16.10:58238.service: Deactivated successfully. Sep 16 04:44:34.474037 systemd[1]: session-18.scope: Deactivated successfully. Sep 16 04:44:34.474906 systemd-logind[1847]: Session 18 logged out. Waiting for processes to exit. Sep 16 04:44:34.476076 systemd-logind[1847]: Removed session 18. Sep 16 04:44:34.542232 systemd[1]: Started sshd@16-10.200.20.12:22-10.200.16.10:58248.service - OpenSSH per-connection server daemon (10.200.16.10:58248). Sep 16 04:44:34.961910 sshd[6788]: Accepted publickey for core from 10.200.16.10 port 58248 ssh2: RSA SHA256:Zp261ZQ+rITQnwTNSqi9daI+o6M3rFPHdoLGEYx3TkE Sep 16 04:44:34.963116 sshd-session[6788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:44:34.967211 systemd-logind[1847]: New session 19 of user core. Sep 16 04:44:34.970723 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 16 04:44:35.186999 containerd[1865]: time="2025-09-16T04:44:35.186950880Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d1f08e9c24789e25c7bb9b1a3c69d1fa9201d4793e1c9de4574b81a0f834b0b1\" id:\"44522fe7d2853ef5b447994df58d43b0e01f38ac77f7f342afee77c6532bb76a\" pid:6804 exited_at:{seconds:1757997875 nanos:186101094}" Sep 16 04:44:35.853487 sshd[6791]: Connection closed by 10.200.16.10 port 58248 Sep 16 04:44:35.852578 sshd-session[6788]: pam_unix(sshd:session): session closed for user core Sep 16 04:44:35.856301 systemd[1]: sshd@16-10.200.20.12:22-10.200.16.10:58248.service: Deactivated successfully. Sep 16 04:44:35.859805 systemd[1]: session-19.scope: Deactivated successfully. Sep 16 04:44:35.861418 systemd-logind[1847]: Session 19 logged out. Waiting for processes to exit. Sep 16 04:44:35.863700 systemd-logind[1847]: Removed session 19. Sep 16 04:44:35.928830 systemd[1]: Started sshd@17-10.200.20.12:22-10.200.16.10:58264.service - OpenSSH per-connection server daemon (10.200.16.10:58264). Sep 16 04:44:36.345721 sshd[6834]: Accepted publickey for core from 10.200.16.10 port 58264 ssh2: RSA SHA256:Zp261ZQ+rITQnwTNSqi9daI+o6M3rFPHdoLGEYx3TkE Sep 16 04:44:36.348792 sshd-session[6834]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:44:36.352403 systemd-logind[1847]: New session 20 of user core. Sep 16 04:44:36.360915 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 16 04:44:36.784536 sshd[6837]: Connection closed by 10.200.16.10 port 58264 Sep 16 04:44:36.785076 sshd-session[6834]: pam_unix(sshd:session): session closed for user core Sep 16 04:44:36.787944 systemd-logind[1847]: Session 20 logged out. Waiting for processes to exit. Sep 16 04:44:36.789428 systemd[1]: sshd@17-10.200.20.12:22-10.200.16.10:58264.service: Deactivated successfully. Sep 16 04:44:36.792033 systemd[1]: session-20.scope: Deactivated successfully. Sep 16 04:44:36.793320 systemd-logind[1847]: Removed session 20. Sep 16 04:44:36.862164 systemd[1]: Started sshd@18-10.200.20.12:22-10.200.16.10:58280.service - OpenSSH per-connection server daemon (10.200.16.10:58280). 
Sep 16 04:44:37.276394 sshd[6847]: Accepted publickey for core from 10.200.16.10 port 58280 ssh2: RSA SHA256:Zp261ZQ+rITQnwTNSqi9daI+o6M3rFPHdoLGEYx3TkE Sep 16 04:44:37.277460 sshd-session[6847]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:44:37.280940 systemd-logind[1847]: New session 21 of user core. Sep 16 04:44:37.296531 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 16 04:44:37.633711 sshd[6850]: Connection closed by 10.200.16.10 port 58280 Sep 16 04:44:37.634206 sshd-session[6847]: pam_unix(sshd:session): session closed for user core Sep 16 04:44:37.637234 systemd[1]: sshd@18-10.200.20.12:22-10.200.16.10:58280.service: Deactivated successfully. Sep 16 04:44:37.639157 systemd[1]: session-21.scope: Deactivated successfully. Sep 16 04:44:37.639865 systemd-logind[1847]: Session 21 logged out. Waiting for processes to exit. Sep 16 04:44:37.641132 systemd-logind[1847]: Removed session 21. Sep 16 04:44:40.170984 containerd[1865]: time="2025-09-16T04:44:40.170943128Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e15f6881385296be1473cd7ced5095e431d6a3fac2873178e81ee662b51c66e1\" id:\"4b8fb815f15e87e8d3e74f43df0dfc621cac4a870eb956e350510363efec19e5\" pid:6873 exited_at:{seconds:1757997880 nanos:170717377}" Sep 16 04:44:42.708306 systemd[1]: Started sshd@19-10.200.20.12:22-10.200.16.10:33150.service - OpenSSH per-connection server daemon (10.200.16.10:33150). Sep 16 04:44:43.123133 sshd[6886]: Accepted publickey for core from 10.200.16.10 port 33150 ssh2: RSA SHA256:Zp261ZQ+rITQnwTNSqi9daI+o6M3rFPHdoLGEYx3TkE Sep 16 04:44:43.124286 sshd-session[6886]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:44:43.128009 systemd-logind[1847]: New session 22 of user core. Sep 16 04:44:43.137742 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 16 04:44:43.485892 sshd[6889]: Connection closed by 10.200.16.10 port 33150 Sep 16 04:44:43.486479 sshd-session[6886]: pam_unix(sshd:session): session closed for user core Sep 16 04:44:43.490329 systemd[1]: sshd@19-10.200.20.12:22-10.200.16.10:33150.service: Deactivated successfully. Sep 16 04:44:43.492015 systemd[1]: session-22.scope: Deactivated successfully. Sep 16 04:44:43.492570 systemd-logind[1847]: Session 22 logged out. Waiting for processes to exit. Sep 16 04:44:43.494425 systemd-logind[1847]: Removed session 22. Sep 16 04:44:48.562806 systemd[1]: Started sshd@20-10.200.20.12:22-10.200.16.10:33166.service - OpenSSH per-connection server daemon (10.200.16.10:33166). Sep 16 04:44:48.973265 sshd[6902]: Accepted publickey for core from 10.200.16.10 port 33166 ssh2: RSA SHA256:Zp261ZQ+rITQnwTNSqi9daI+o6M3rFPHdoLGEYx3TkE Sep 16 04:44:48.974321 sshd-session[6902]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:44:48.978343 systemd-logind[1847]: New session 23 of user core. Sep 16 04:44:48.982731 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 16 04:44:49.335218 sshd[6905]: Connection closed by 10.200.16.10 port 33166 Sep 16 04:44:49.335055 sshd-session[6902]: pam_unix(sshd:session): session closed for user core Sep 16 04:44:49.338874 systemd-logind[1847]: Session 23 logged out. Waiting for processes to exit. Sep 16 04:44:49.339772 systemd[1]: sshd@20-10.200.20.12:22-10.200.16.10:33166.service: Deactivated successfully. Sep 16 04:44:49.342976 systemd[1]: session-23.scope: Deactivated successfully. Sep 16 04:44:49.344421 systemd-logind[1847]: Removed session 23. 
Sep 16 04:44:54.416175 systemd[1]: Started sshd@21-10.200.20.12:22-10.200.16.10:47664.service - OpenSSH per-connection server daemon (10.200.16.10:47664). Sep 16 04:44:54.588434 containerd[1865]: time="2025-09-16T04:44:54.588385701Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f4fee02196b9e4fd8828c2aa0a9ce23a2673f599d1b90bd498ee291154737863\" id:\"6cc235003c210c350ed19ec12aca26aeae19f9f6e0f9c57aadc16bacfbb9b1ab\" pid:6935 exited_at:{seconds:1757997894 nanos:587932367}" Sep 16 04:44:54.835596 sshd[6919]: Accepted publickey for core from 10.200.16.10 port 47664 ssh2: RSA SHA256:Zp261ZQ+rITQnwTNSqi9daI+o6M3rFPHdoLGEYx3TkE Sep 16 04:44:54.837530 sshd-session[6919]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:44:54.843305 systemd-logind[1847]: New session 24 of user core. Sep 16 04:44:54.847801 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 16 04:44:55.213541 sshd[6946]: Connection closed by 10.200.16.10 port 47664 Sep 16 04:44:55.214813 sshd-session[6919]: pam_unix(sshd:session): session closed for user core Sep 16 04:44:55.218251 systemd-logind[1847]: Session 24 logged out. Waiting for processes to exit. Sep 16 04:44:55.218786 systemd[1]: sshd@21-10.200.20.12:22-10.200.16.10:47664.service: Deactivated successfully. Sep 16 04:44:55.221167 systemd[1]: session-24.scope: Deactivated successfully. Sep 16 04:44:55.223208 systemd-logind[1847]: Removed session 24. Sep 16 04:45:00.303382 systemd[1]: Started sshd@22-10.200.20.12:22-10.200.16.10:38962.service - OpenSSH per-connection server daemon (10.200.16.10:38962). Sep 16 04:45:00.796787 sshd[6964]: Accepted publickey for core from 10.200.16.10 port 38962 ssh2: RSA SHA256:Zp261ZQ+rITQnwTNSqi9daI+o6M3rFPHdoLGEYx3TkE Sep 16 04:45:00.798265 sshd-session[6964]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:45:00.801717 systemd-logind[1847]: New session 25 of user core. Sep 16 04:45:00.809713 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 16 04:45:01.197640 sshd[6967]: Connection closed by 10.200.16.10 port 38962 Sep 16 04:45:01.198415 sshd-session[6964]: pam_unix(sshd:session): session closed for user core Sep 16 04:45:01.201468 systemd[1]: sshd@22-10.200.20.12:22-10.200.16.10:38962.service: Deactivated successfully. Sep 16 04:45:01.203059 systemd[1]: session-25.scope: Deactivated successfully. Sep 16 04:45:01.203868 systemd-logind[1847]: Session 25 logged out. Waiting for processes to exit. Sep 16 04:45:01.205284 systemd-logind[1847]: Removed session 25. Sep 16 04:45:05.183582 containerd[1865]: time="2025-09-16T04:45:05.183432133Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d1f08e9c24789e25c7bb9b1a3c69d1fa9201d4793e1c9de4574b81a0f834b0b1\" id:\"8ac0beadef648c5eb07d25af01a33d657d0b8fea91c1b51b23a90c0cf59b96d5\" pid:6990 exited_at:{seconds:1757997905 nanos:182989631}" Sep 16 04:45:06.281691 systemd[1]: Started sshd@23-10.200.20.12:22-10.200.16.10:38964.service - OpenSSH per-connection server daemon (10.200.16.10:38964). Sep 16 04:45:06.700742 sshd[7001]: Accepted publickey for core from 10.200.16.10 port 38964 ssh2: RSA SHA256:Zp261ZQ+rITQnwTNSqi9daI+o6M3rFPHdoLGEYx3TkE Sep 16 04:45:06.702146 sshd-session[7001]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:45:06.705657 systemd-logind[1847]: New session 26 of user core. Sep 16 04:45:06.712815 systemd[1]: Started session-26.scope - Session 26 of User core. 
Sep 16 04:45:07.047921 sshd[7004]: Connection closed by 10.200.16.10 port 38964 Sep 16 04:45:07.048596 sshd-session[7001]: pam_unix(sshd:session): session closed for user core Sep 16 04:45:07.053938 systemd[1]: sshd@23-10.200.20.12:22-10.200.16.10:38964.service: Deactivated successfully. Sep 16 04:45:07.055590 systemd[1]: session-26.scope: Deactivated successfully. Sep 16 04:45:07.057415 systemd-logind[1847]: Session 26 logged out. Waiting for processes to exit. Sep 16 04:45:07.059052 systemd-logind[1847]: Removed session 26. Sep 16 04:45:10.169916 containerd[1865]: time="2025-09-16T04:45:10.169878121Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e15f6881385296be1473cd7ced5095e431d6a3fac2873178e81ee662b51c66e1\" id:\"c1c23de3211676fe31129667f41052616aca06980d34cac21efd51cb4a5b2fe6\" pid:7027 exited_at:{seconds:1757997910 nanos:169674859}" Sep 16 04:45:12.067291 containerd[1865]: time="2025-09-16T04:45:12.067215571Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e15f6881385296be1473cd7ced5095e431d6a3fac2873178e81ee662b51c66e1\" id:\"465cac69fa22234b0581a1963d198dd1c0b08050955a9f18e1576fb518d02d57\" pid:7049 exited_at:{seconds:1757997912 nanos:67010228}" Sep 16 04:45:12.129815 systemd[1]: Started sshd@24-10.200.20.12:22-10.200.16.10:40716.service - OpenSSH per-connection server daemon (10.200.16.10:40716). Sep 16 04:45:12.584320 sshd[7059]: Accepted publickey for core from 10.200.16.10 port 40716 ssh2: RSA SHA256:Zp261ZQ+rITQnwTNSqi9daI+o6M3rFPHdoLGEYx3TkE Sep 16 04:45:12.585056 sshd-session[7059]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:45:12.589175 systemd-logind[1847]: New session 27 of user core. Sep 16 04:45:12.597722 systemd[1]: Started session-27.scope - Session 27 of User core. Sep 16 04:45:12.947959 sshd[7062]: Connection closed by 10.200.16.10 port 40716 Sep 16 04:45:12.948543 sshd-session[7059]: pam_unix(sshd:session): session closed for user core Sep 16 04:45:12.951413 systemd[1]: sshd@24-10.200.20.12:22-10.200.16.10:40716.service: Deactivated successfully. Sep 16 04:45:12.953118 systemd[1]: session-27.scope: Deactivated successfully. Sep 16 04:45:12.953830 systemd-logind[1847]: Session 27 logged out. Waiting for processes to exit. Sep 16 04:45:12.954855 systemd-logind[1847]: Removed session 27.