Jul 9 23:45:45.060664 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd490]
Jul 9 23:45:45.060681 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Wed Jul 9 22:19:33 -00 2025
Jul 9 23:45:45.060687 kernel: KASLR enabled
Jul 9 23:45:45.060691 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Jul 9 23:45:45.060696 kernel: printk: legacy bootconsole [pl11] enabled
Jul 9 23:45:45.060700 kernel: efi: EFI v2.7 by EDK II
Jul 9 23:45:45.060705 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f20e698 RNG=0x3fd5f998 MEMRESERVE=0x3e477598
Jul 9 23:45:45.060708 kernel: random: crng init done
Jul 9 23:45:45.060712 kernel: secureboot: Secure boot disabled
Jul 9 23:45:45.060716 kernel: ACPI: Early table checksum verification disabled
Jul 9 23:45:45.060720 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Jul 9 23:45:45.060724 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 9 23:45:45.060727 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 9 23:45:45.060732 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Jul 9 23:45:45.060737 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 9 23:45:45.060741 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 9 23:45:45.060746 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 9 23:45:45.060751 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 9 23:45:45.060755 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 9 23:45:45.060759 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 9 23:45:45.060763 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Jul 9 23:45:45.060767 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 9 23:45:45.060772 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Jul 9 23:45:45.060776 kernel: ACPI: Use ACPI SPCR as default console: Yes
Jul 9 23:45:45.060780 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Jul 9 23:45:45.060784 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] hotplug
Jul 9 23:45:45.060788 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] hotplug
Jul 9 23:45:45.060792 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Jul 9 23:45:45.060796 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Jul 9 23:45:45.060801 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Jul 9 23:45:45.060805 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Jul 9 23:45:45.060809 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Jul 9 23:45:45.060813 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Jul 9 23:45:45.060817 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Jul 9 23:45:45.060822 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Jul 9 23:45:45.060826 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Jul 9 23:45:45.060830 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x1bfffffff] -> [mem 0x00000000-0x1bfffffff]
Jul 9 23:45:45.060834 kernel: NODE_DATA(0) allocated [mem 0x1bf7fddc0-0x1bf804fff]
Jul 9 23:45:45.060838 kernel: Zone ranges:
Jul 9 23:45:45.060842 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Jul 9 23:45:45.060849 kernel: DMA32 empty
Jul 9 23:45:45.060853 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Jul 9 23:45:45.060858 kernel: Device empty
Jul 9 23:45:45.060862 kernel: Movable zone start for each node
Jul 9 23:45:45.060866 kernel: Early memory node ranges
Jul 9 23:45:45.060871 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Jul 9 23:45:45.060876 kernel: node 0: [mem 0x0000000000824000-0x000000003e45ffff]
Jul 9 23:45:45.060880 kernel: node 0: [mem 0x000000003e460000-0x000000003e46ffff]
Jul 9 23:45:45.060884 kernel: node 0: [mem 0x000000003e470000-0x000000003e54ffff]
Jul 9 23:45:45.060888 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Jul 9 23:45:45.060893 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Jul 9 23:45:45.060897 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Jul 9 23:45:45.060901 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Jul 9 23:45:45.060905 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Jul 9 23:45:45.060910 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Jul 9 23:45:45.060914 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Jul 9 23:45:45.060918 kernel: psci: probing for conduit method from ACPI.
Jul 9 23:45:45.060923 kernel: psci: PSCIv1.1 detected in firmware.
Jul 9 23:45:45.060928 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 9 23:45:45.060932 kernel: psci: MIGRATE_INFO_TYPE not supported.
Jul 9 23:45:45.060936 kernel: psci: SMC Calling Convention v1.4
Jul 9 23:45:45.060941 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Jul 9 23:45:45.060945 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Jul 9 23:45:45.060949 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Jul 9 23:45:45.060953 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Jul 9 23:45:45.060958 kernel: pcpu-alloc: [0] 0 [0] 1
Jul 9 23:45:45.060962 kernel: Detected PIPT I-cache on CPU0
Jul 9 23:45:45.060967 kernel: CPU features: detected: Address authentication (architected QARMA5 algorithm)
Jul 9 23:45:45.060972 kernel: CPU features: detected: GIC system register CPU interface
Jul 9 23:45:45.060977 kernel: CPU features: detected: Spectre-v4
Jul 9 23:45:45.060981 kernel: CPU features: detected: Spectre-BHB
Jul 9 23:45:45.060985 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 9 23:45:45.060990 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 9 23:45:45.060994 kernel: CPU features: detected: ARM erratum 2067961 or 2054223
Jul 9 23:45:45.060998 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 9 23:45:45.061003 kernel: alternatives: applying boot alternatives
Jul 9 23:45:45.061008 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=da23c3aa7de24c290e5e9aff0a0fccd6a322ecaa9bbfc71c29b2f39446459116
Jul 9 23:45:45.061012 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 9 23:45:45.061017 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 9 23:45:45.061022 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 9 23:45:45.061026 kernel: Fallback order for Node 0: 0
Jul 9 23:45:45.061031 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048540
Jul 9 23:45:45.061035 kernel: Policy zone: Normal
Jul 9 23:45:45.061039 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 9 23:45:45.061043 kernel: software IO TLB: area num 2.
Jul 9 23:45:45.061048 kernel: software IO TLB: mapped [mem 0x000000003a460000-0x000000003e460000] (64MB)
Jul 9 23:45:45.061052 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 9 23:45:45.061056 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 9 23:45:45.061061 kernel: rcu: RCU event tracing is enabled.
Jul 9 23:45:45.061066 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 9 23:45:45.061071 kernel: Trampoline variant of Tasks RCU enabled.
Jul 9 23:45:45.061075 kernel: Tracing variant of Tasks RCU enabled.
Jul 9 23:45:45.061080 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 9 23:45:45.061084 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 9 23:45:45.061088 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 9 23:45:45.061093 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 9 23:45:45.061097 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 9 23:45:45.061102 kernel: GICv3: 960 SPIs implemented
Jul 9 23:45:45.061106 kernel: GICv3: 0 Extended SPIs implemented
Jul 9 23:45:45.061110 kernel: Root IRQ handler: gic_handle_irq
Jul 9 23:45:45.061114 kernel: GICv3: GICv3 features: 16 PPIs, RSS
Jul 9 23:45:45.061119 kernel: GICv3: GICD_CTRL.DS=0, SCR_EL3.FIQ=0
Jul 9 23:45:45.061124 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Jul 9 23:45:45.061128 kernel: ITS: No ITS available, not enabling LPIs
Jul 9 23:45:45.061132 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 9 23:45:45.061137 kernel: arch_timer: cp15 timer(s) running at 1000.00MHz (virt).
Jul 9 23:45:45.061141 kernel: clocksource: arch_sys_counter: mask: 0x1fffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 9 23:45:45.061146 kernel: sched_clock: 61 bits at 1000MHz, resolution 1ns, wraps every 4398046511103ns
Jul 9 23:45:45.061150 kernel: Console: colour dummy device 80x25
Jul 9 23:45:45.061155 kernel: printk: legacy console [tty1] enabled
Jul 9 23:45:45.061159 kernel: ACPI: Core revision 20240827
Jul 9 23:45:45.061164 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 2000.00 BogoMIPS (lpj=1000000)
Jul 9 23:45:45.061169 kernel: pid_max: default: 32768 minimum: 301
Jul 9 23:45:45.061173 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 9 23:45:45.061178 kernel: landlock: Up and running.
Jul 9 23:45:45.061182 kernel: SELinux: Initializing.
Jul 9 23:45:45.061187 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 9 23:45:45.061191 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 9 23:45:45.061199 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x1a0000e, misc 0x31e1
Jul 9 23:45:45.061204 kernel: Hyper-V: Host Build 10.0.26100.1261-1-0
Jul 9 23:45:45.061209 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jul 9 23:45:45.061214 kernel: rcu: Hierarchical SRCU implementation.
Jul 9 23:45:45.061219 kernel: rcu: Max phase no-delay instances is 400.
Jul 9 23:45:45.061223 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 9 23:45:45.061229 kernel: Remapping and enabling EFI services.
Jul 9 23:45:45.061234 kernel: smp: Bringing up secondary CPUs ...
Jul 9 23:45:45.061238 kernel: Detected PIPT I-cache on CPU1
Jul 9 23:45:45.061243 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Jul 9 23:45:45.061248 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd490]
Jul 9 23:45:45.061253 kernel: smp: Brought up 1 node, 2 CPUs
Jul 9 23:45:45.061258 kernel: SMP: Total of 2 processors activated.
Jul 9 23:45:45.061263 kernel: CPU: All CPU(s) started at EL1
Jul 9 23:45:45.061267 kernel: CPU features: detected: 32-bit EL0 Support
Jul 9 23:45:45.061272 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Jul 9 23:45:45.061277 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 9 23:45:45.061282 kernel: CPU features: detected: Common not Private translations
Jul 9 23:45:45.061286 kernel: CPU features: detected: CRC32 instructions
Jul 9 23:45:45.061291 kernel: CPU features: detected: Generic authentication (architected QARMA5 algorithm)
Jul 9 23:45:45.061297 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 9 23:45:45.061301 kernel: CPU features: detected: LSE atomic instructions
Jul 9 23:45:45.061306 kernel: CPU features: detected: Privileged Access Never
Jul 9 23:45:45.061311 kernel: CPU features: detected: Speculation barrier (SB)
Jul 9 23:45:45.061315 kernel: CPU features: detected: TLB range maintenance instructions
Jul 9 23:45:45.061320 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jul 9 23:45:45.061325 kernel: CPU features: detected: Scalable Vector Extension
Jul 9 23:45:45.061329 kernel: alternatives: applying system-wide alternatives
Jul 9 23:45:45.061334 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1
Jul 9 23:45:45.061340 kernel: SVE: maximum available vector length 16 bytes per vector
Jul 9 23:45:45.061344 kernel: SVE: default vector length 16 bytes per vector
Jul 9 23:45:45.061349 kernel: Memory: 3975544K/4194160K available (11136K kernel code, 2428K rwdata, 9032K rodata, 39488K init, 1035K bss, 213816K reserved, 0K cma-reserved)
Jul 9 23:45:45.061354 kernel: devtmpfs: initialized
Jul 9 23:45:45.061367 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 9 23:45:45.061372 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 9 23:45:45.061377 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jul 9 23:45:45.061381 kernel: 0 pages in range for non-PLT usage
Jul 9 23:45:45.061386 kernel: 508448 pages in range for PLT usage
Jul 9 23:45:45.061392 kernel: pinctrl core: initialized pinctrl subsystem
Jul 9 23:45:45.061396 kernel: SMBIOS 3.1.0 present.
Jul 9 23:45:45.061401 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Jul 9 23:45:45.061406 kernel: DMI: Memory slots populated: 2/2
Jul 9 23:45:45.061411 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 9 23:45:45.061416 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 9 23:45:45.061420 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 9 23:45:45.061425 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 9 23:45:45.061430 kernel: audit: initializing netlink subsys (disabled)
Jul 9 23:45:45.061435 kernel: audit: type=2000 audit(0.059:1): state=initialized audit_enabled=0 res=1
Jul 9 23:45:45.061440 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 9 23:45:45.061445 kernel: cpuidle: using governor menu
Jul 9 23:45:45.061450 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 9 23:45:45.061454 kernel: ASID allocator initialised with 32768 entries
Jul 9 23:45:45.061459 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 9 23:45:45.061464 kernel: Serial: AMBA PL011 UART driver
Jul 9 23:45:45.061468 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 9 23:45:45.061473 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 9 23:45:45.061478 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 9 23:45:45.061483 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 9 23:45:45.061488 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 9 23:45:45.061492 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 9 23:45:45.061497 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 9 23:45:45.061502 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 9 23:45:45.061506 kernel: ACPI: Added _OSI(Module Device)
Jul 9 23:45:45.061511 kernel: ACPI: Added _OSI(Processor Device)
Jul 9 23:45:45.061516 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 9 23:45:45.061521 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 9 23:45:45.061526 kernel: ACPI: Interpreter enabled
Jul 9 23:45:45.061530 kernel: ACPI: Using GIC for interrupt routing
Jul 9 23:45:45.061535 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Jul 9 23:45:45.061540 kernel: printk: legacy console [ttyAMA0] enabled
Jul 9 23:45:45.061544 kernel: printk: legacy bootconsole [pl11] disabled
Jul 9 23:45:45.061549 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Jul 9 23:45:45.061554 kernel: ACPI: CPU0 has been hot-added
Jul 9 23:45:45.061558 kernel: ACPI: CPU1 has been hot-added
Jul 9 23:45:45.061564 kernel: iommu: Default domain type: Translated
Jul 9 23:45:45.061568 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 9 23:45:45.061573 kernel: efivars: Registered efivars operations
Jul 9 23:45:45.061578 kernel: vgaarb: loaded
Jul 9 23:45:45.061582 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 9 23:45:45.061587 kernel: VFS: Disk quotas dquot_6.6.0
Jul 9 23:45:45.061592 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 9 23:45:45.061597 kernel: pnp: PnP ACPI init
Jul 9 23:45:45.061601 kernel: pnp: PnP ACPI: found 0 devices
Jul 9 23:45:45.061607 kernel: NET: Registered PF_INET protocol family
Jul 9 23:45:45.061611 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 9 23:45:45.061616 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 9 23:45:45.061621 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 9 23:45:45.061626 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 9 23:45:45.061630 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 9 23:45:45.061635 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 9 23:45:45.061640 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 9 23:45:45.061645 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 9 23:45:45.061650 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 9 23:45:45.061655 kernel: PCI: CLS 0 bytes, default 64
Jul 9 23:45:45.061659 kernel: kvm [1]: HYP mode not available
Jul 9 23:45:45.061664 kernel: Initialise system trusted keyrings
Jul 9 23:45:45.061668 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 9 23:45:45.061673 kernel: Key type asymmetric registered
Jul 9 23:45:45.061678 kernel: Asymmetric key parser 'x509' registered
Jul 9 23:45:45.061682 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jul 9 23:45:45.061687 kernel: io scheduler mq-deadline registered
Jul 9 23:45:45.061692 kernel: io scheduler kyber registered
Jul 9 23:45:45.061697 kernel: io scheduler bfq registered
Jul 9 23:45:45.061702 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 9 23:45:45.061706 kernel: thunder_xcv, ver 1.0
Jul 9 23:45:45.061711 kernel: thunder_bgx, ver 1.0
Jul 9 23:45:45.061715 kernel: nicpf, ver 1.0
Jul 9 23:45:45.061720 kernel: nicvf, ver 1.0
Jul 9 23:45:45.061818 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 9 23:45:45.061871 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-09T23:45:44 UTC (1752104744)
Jul 9 23:45:45.061878 kernel: efifb: probing for efifb
Jul 9 23:45:45.061882 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jul 9 23:45:45.061887 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jul 9 23:45:45.061892 kernel: efifb: scrolling: redraw
Jul 9 23:45:45.061896 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jul 9 23:45:45.061901 kernel: Console: switching to colour frame buffer device 128x48
Jul 9 23:45:45.061906 kernel: fb0: EFI VGA frame buffer device
Jul 9 23:45:45.061910 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Jul 9 23:45:45.061916 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 9 23:45:45.061921 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Jul 9 23:45:45.061926 kernel: watchdog: NMI not fully supported
Jul 9 23:45:45.061930 kernel: watchdog: Hard watchdog permanently disabled
Jul 9 23:45:45.061935 kernel: NET: Registered PF_INET6 protocol family
Jul 9 23:45:45.061940 kernel: Segment Routing with IPv6
Jul 9 23:45:45.061944 kernel: In-situ OAM (IOAM) with IPv6
Jul 9 23:45:45.061949 kernel: NET: Registered PF_PACKET protocol family
Jul 9 23:45:45.061954 kernel: Key type dns_resolver registered
Jul 9 23:45:45.061959 kernel: registered taskstats version 1
Jul 9 23:45:45.061964 kernel: Loading compiled-in X.509 certificates
Jul 9 23:45:45.061969 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: 11eff9deb028731c4f89f27f6fac8d1c08902e5a'
Jul 9 23:45:45.061973 kernel: Demotion targets for Node 0: null
Jul 9 23:45:45.061978 kernel: Key type .fscrypt registered
Jul 9 23:45:45.061983 kernel: Key type fscrypt-provisioning registered
Jul 9 23:45:45.061987 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 9 23:45:45.061992 kernel: ima: Allocated hash algorithm: sha1
Jul 9 23:45:45.061997 kernel: ima: No architecture policies found
Jul 9 23:45:45.062002 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 9 23:45:45.062007 kernel: clk: Disabling unused clocks
Jul 9 23:45:45.062012 kernel: PM: genpd: Disabling unused power domains
Jul 9 23:45:45.062016 kernel: Warning: unable to open an initial console.
Jul 9 23:45:45.062021 kernel: Freeing unused kernel memory: 39488K
Jul 9 23:45:45.062026 kernel: Run /init as init process
Jul 9 23:45:45.062030 kernel: with arguments:
Jul 9 23:45:45.062035 kernel: /init
Jul 9 23:45:45.062039 kernel: with environment:
Jul 9 23:45:45.062045 kernel: HOME=/
Jul 9 23:45:45.062049 kernel: TERM=linux
Jul 9 23:45:45.062054 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 9 23:45:45.062059 systemd[1]: Successfully made /usr/ read-only.
Jul 9 23:45:45.062066 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 9 23:45:45.062072 systemd[1]: Detected virtualization microsoft.
Jul 9 23:45:45.062076 systemd[1]: Detected architecture arm64.
Jul 9 23:45:45.062082 systemd[1]: Running in initrd.
Jul 9 23:45:45.062087 systemd[1]: No hostname configured, using default hostname.
Jul 9 23:45:45.062093 systemd[1]: Hostname set to .
Jul 9 23:45:45.062097 systemd[1]: Initializing machine ID from random generator.
Jul 9 23:45:45.062102 systemd[1]: Queued start job for default target initrd.target.
Jul 9 23:45:45.062108 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 9 23:45:45.062113 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 9 23:45:45.062118 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 9 23:45:45.062124 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 9 23:45:45.062130 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 9 23:45:45.062135 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 9 23:45:45.062141 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 9 23:45:45.062146 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 9 23:45:45.062151 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 9 23:45:45.062157 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 9 23:45:45.062163 systemd[1]: Reached target paths.target - Path Units.
Jul 9 23:45:45.062168 systemd[1]: Reached target slices.target - Slice Units.
Jul 9 23:45:45.062173 systemd[1]: Reached target swap.target - Swaps.
Jul 9 23:45:45.062178 systemd[1]: Reached target timers.target - Timer Units.
Jul 9 23:45:45.062183 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 9 23:45:45.062188 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 9 23:45:45.062193 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 9 23:45:45.062198 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 9 23:45:45.062204 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 9 23:45:45.062209 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 9 23:45:45.062214 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 9 23:45:45.062220 systemd[1]: Reached target sockets.target - Socket Units.
Jul 9 23:45:45.062225 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 9 23:45:45.062230 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 9 23:45:45.062235 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 9 23:45:45.062240 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jul 9 23:45:45.062245 systemd[1]: Starting systemd-fsck-usr.service...
Jul 9 23:45:45.062251 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 9 23:45:45.062257 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 9 23:45:45.062271 systemd-journald[224]: Collecting audit messages is disabled.
Jul 9 23:45:45.062285 systemd-journald[224]: Journal started
Jul 9 23:45:45.062299 systemd-journald[224]: Runtime Journal (/run/log/journal/3ea1bf6cc81245b799ba2b512647cd21) is 8M, max 78.5M, 70.5M free.
Jul 9 23:45:45.076039 systemd-modules-load[226]: Inserted module 'overlay'
Jul 9 23:45:45.080607 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 9 23:45:45.097636 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 9 23:45:45.097668 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 9 23:45:45.101456 kernel: Bridge firewalling registered
Jul 9 23:45:45.103737 systemd-modules-load[226]: Inserted module 'br_netfilter'
Jul 9 23:45:45.107342 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 9 23:45:45.116460 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 9 23:45:45.121503 systemd[1]: Finished systemd-fsck-usr.service.
Jul 9 23:45:45.132351 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 9 23:45:45.141108 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 9 23:45:45.147807 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 9 23:45:45.168487 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 9 23:45:45.174368 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 9 23:45:45.185979 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 9 23:45:45.198290 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 9 23:45:45.213125 systemd-tmpfiles[245]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jul 9 23:45:45.213380 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 9 23:45:45.226067 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 9 23:45:45.235700 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 9 23:45:45.247056 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 9 23:45:45.263202 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 9 23:45:45.271348 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 9 23:45:45.288740 dracut-cmdline[260]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=da23c3aa7de24c290e5e9aff0a0fccd6a322ecaa9bbfc71c29b2f39446459116
Jul 9 23:45:45.312469 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 9 23:45:45.336205 systemd-resolved[261]: Positive Trust Anchors:
Jul 9 23:45:45.346351 kernel: SCSI subsystem initialized
Jul 9 23:45:45.339152 systemd-resolved[261]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 9 23:45:45.354645 kernel: Loading iSCSI transport class v2.0-870.
Jul 9 23:45:45.339174 systemd-resolved[261]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 9 23:45:45.340885 systemd-resolved[261]: Defaulting to hostname 'linux'.
Jul 9 23:45:45.391900 kernel: iscsi: registered transport (tcp)
Jul 9 23:45:45.341574 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 9 23:45:45.381133 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 9 23:45:45.405557 kernel: iscsi: registered transport (qla4xxx)
Jul 9 23:45:45.405568 kernel: QLogic iSCSI HBA Driver
Jul 9 23:45:45.417744 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 9 23:45:45.436346 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 9 23:45:45.448001 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 9 23:45:45.488606 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 9 23:45:45.495477 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 9 23:45:45.556375 kernel: raid6: neonx8 gen() 18552 MB/s
Jul 9 23:45:45.575365 kernel: raid6: neonx4 gen() 18571 MB/s
Jul 9 23:45:45.594363 kernel: raid6: neonx2 gen() 17094 MB/s
Jul 9 23:45:45.614432 kernel: raid6: neonx1 gen() 15016 MB/s
Jul 9 23:45:45.634439 kernel: raid6: int64x8 gen() 10535 MB/s
Jul 9 23:45:45.653441 kernel: raid6: int64x4 gen() 10612 MB/s
Jul 9 23:45:45.673364 kernel: raid6: int64x2 gen() 8994 MB/s
Jul 9 23:45:45.694592 kernel: raid6: int64x1 gen() 7015 MB/s
Jul 9 23:45:45.694599 kernel: raid6: using algorithm neonx4 gen() 18571 MB/s
Jul 9 23:45:45.716586 kernel: raid6: .... xor() 15150 MB/s, rmw enabled
Jul 9 23:45:45.716632 kernel: raid6: using neon recovery algorithm
Jul 9 23:45:45.724702 kernel: xor: measuring software checksum speed
Jul 9 23:45:45.724736 kernel: 8regs : 28611 MB/sec
Jul 9 23:45:45.727884 kernel: 32regs : 28802 MB/sec
Jul 9 23:45:45.730314 kernel: arm64_neon : 37597 MB/sec
Jul 9 23:45:45.733120 kernel: xor: using function: arm64_neon (37597 MB/sec)
Jul 9 23:45:45.770374 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 9 23:45:45.776387 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 9 23:45:45.785464 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 9 23:45:45.816093 systemd-udevd[474]: Using default interface naming scheme 'v255'.
Jul 9 23:45:45.818979 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 9 23:45:45.832481 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 9 23:45:45.857383 dracut-pre-trigger[490]: rd.md=0: removing MD RAID activation
Jul 9 23:45:45.875900 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 9 23:45:45.885935 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 9 23:45:45.924996 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 9 23:45:45.936693 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 9 23:45:45.994380 kernel: hv_vmbus: Vmbus version:5.3 Jul 9 23:45:45.995805 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 9 23:45:46.000512 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 9 23:45:46.040935 kernel: hv_vmbus: registering driver hyperv_keyboard Jul 9 23:45:46.040955 kernel: hv_vmbus: registering driver hid_hyperv Jul 9 23:45:46.040962 kernel: pps_core: LinuxPPS API ver. 1 registered Jul 9 23:45:46.040969 kernel: hv_vmbus: registering driver hv_storvsc Jul 9 23:45:46.040978 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jul 9 23:45:46.040986 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Jul 9 23:45:46.040993 kernel: hv_vmbus: registering driver hv_netvsc Jul 9 23:45:46.040999 kernel: PTP clock support registered Jul 9 23:45:46.022449 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 9 23:45:46.075019 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Jul 9 23:45:46.075035 kernel: scsi host0: storvsc_host_t Jul 9 23:45:46.075171 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jul 9 23:45:46.075236 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jul 9 23:45:46.075252 kernel: scsi host1: storvsc_host_t Jul 9 23:45:46.075315 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Jul 9 23:45:46.085575 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jul 9 23:45:46.104135 kernel: hv_utils: Registering HyperV Utility Driver Jul 9 23:45:46.104152 kernel: hv_vmbus: registering driver hv_utils Jul 9 23:45:46.097955 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 9 23:45:46.028113 kernel: hv_utils: Heartbeat IC version 3.0 Jul 9 23:45:46.029034 kernel: hv_utils: Shutdown IC version 3.2 Jul 9 23:45:46.029045 kernel: hv_utils: TimeSync IC version 4.0 Jul 9 23:45:46.029050 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jul 9 23:45:46.032181 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jul 9 23:45:46.032289 systemd-journald[224]: Time jumped backwards, rotating. Jul 9 23:45:46.032323 kernel: sd 0:0:0:0: [sda] Write Protect is off Jul 9 23:45:46.099005 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 9 23:45:46.063599 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jul 9 23:45:46.063717 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jul 9 23:45:46.063785 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#262 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Jul 9 23:45:46.063852 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#269 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Jul 9 23:45:46.099072 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 9 23:45:46.113683 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 9 23:45:46.079116 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 9 23:45:46.079135 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jul 9 23:45:46.023989 systemd-resolved[261]: Clock change detected. Flushing caches. 
Jul 9 23:45:46.086384 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jul 9 23:45:46.088302 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 9 23:45:46.091227 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jul 9 23:45:46.096731 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 9 23:45:46.116187 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#299 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jul 9 23:45:46.135940 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#274 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jul 9 23:45:46.272042 kernel: hv_netvsc 002248be-674e-0022-48be-674e002248be eth0: VF slot 1 added Jul 9 23:45:46.282674 kernel: hv_vmbus: registering driver hv_pci Jul 9 23:45:46.282704 kernel: hv_pci c89bbe2e-bab0-4b8f-aade-177740e2abb5: PCI VMBus probing: Using version 0x10004 Jul 9 23:45:46.292921 kernel: hv_pci c89bbe2e-bab0-4b8f-aade-177740e2abb5: PCI host bridge to bus bab0:00 Jul 9 23:45:46.293046 kernel: pci_bus bab0:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Jul 9 23:45:46.297723 kernel: pci_bus bab0:00: No busn resource found for root bus, will use [bus 00-ff] Jul 9 23:45:46.372685 kernel: pci bab0:00:02.0: [15b3:101a] type 00 class 0x020000 PCIe Endpoint Jul 9 23:45:46.379184 kernel: pci bab0:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref] Jul 9 23:45:46.383219 kernel: pci bab0:00:02.0: enabling Extended Tags Jul 9 23:45:46.397232 kernel: pci bab0:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at bab0:00:02.0 (capable of 252.048 Gb/s with 16.0 GT/s PCIe x16 link) Jul 9 23:45:46.405756 kernel: pci_bus bab0:00: busn_res: [bus 00-ff] end is updated to 00 Jul 9 23:45:46.405895 kernel: pci bab0:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]: assigned Jul 9 23:45:46.460186 kernel: mlx5_core bab0:00:02.0: enabling device (0000 -> 0002) Jul 9 23:45:46.468245 kernel: mlx5_core bab0:00:02.0: PTM is not supported by PCIe Jul 9 23:45:46.468396 
kernel: mlx5_core bab0:00:02.0: firmware version: 16.30.5006 Jul 9 23:45:46.636467 kernel: hv_netvsc 002248be-674e-0022-48be-674e002248be eth0: VF registering: eth1 Jul 9 23:45:46.636646 kernel: mlx5_core bab0:00:02.0 eth1: joined to eth0 Jul 9 23:45:46.641947 kernel: mlx5_core bab0:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Jul 9 23:45:46.651289 kernel: mlx5_core bab0:00:02.0 enP47792s1: renamed from eth1 Jul 9 23:45:46.775785 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jul 9 23:45:46.816635 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jul 9 23:45:46.862663 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jul 9 23:45:46.886085 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jul 9 23:45:46.891513 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jul 9 23:45:46.903067 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 9 23:45:46.913851 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 9 23:45:46.922912 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 9 23:45:46.932344 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 9 23:45:46.946338 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 9 23:45:46.961719 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 9 23:45:46.990415 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#259 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Jul 9 23:45:46.986045 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Jul 9 23:45:47.000293 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 9 23:45:48.012065 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#296 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Jul 9 23:45:48.024145 disk-uuid[668]: The operation has completed successfully. Jul 9 23:45:48.027873 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 9 23:45:48.081706 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 9 23:45:48.081796 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 9 23:45:48.105382 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 9 23:45:48.122069 sh[828]: Success Jul 9 23:45:48.154974 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 9 23:45:48.155016 kernel: device-mapper: uevent: version 1.0.3 Jul 9 23:45:48.161194 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jul 9 23:45:48.169219 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Jul 9 23:45:48.359685 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 9 23:45:48.369677 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 9 23:45:48.376810 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jul 9 23:45:48.401988 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jul 9 23:45:48.402021 kernel: BTRFS: device fsid 0f8170d9-c2a5-4c49-82bc-4e538bfc9b9b devid 1 transid 39 /dev/mapper/usr (254:0) scanned by mount (846) Jul 9 23:45:48.412502 kernel: BTRFS info (device dm-0): first mount of filesystem 0f8170d9-c2a5-4c49-82bc-4e538bfc9b9b Jul 9 23:45:48.412529 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jul 9 23:45:48.415747 kernel: BTRFS info (device dm-0): using free-space-tree Jul 9 23:45:48.709895 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 9 23:45:48.713845 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jul 9 23:45:48.721114 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 9 23:45:48.721854 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 9 23:45:48.741684 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 9 23:45:48.770871 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (883) Jul 9 23:45:48.770906 kernel: BTRFS info (device sda6): first mount of filesystem 3e5253a1-0691-476f-bde5-7794093008ce Jul 9 23:45:48.775224 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jul 9 23:45:48.778686 kernel: BTRFS info (device sda6): using free-space-tree Jul 9 23:45:48.802204 kernel: BTRFS info (device sda6): last unmount of filesystem 3e5253a1-0691-476f-bde5-7794093008ce Jul 9 23:45:48.802998 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 9 23:45:48.808256 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 9 23:45:48.852289 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Jul 9 23:45:48.862826 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 9 23:45:48.890652 systemd-networkd[1015]: lo: Link UP Jul 9 23:45:48.890660 systemd-networkd[1015]: lo: Gained carrier Jul 9 23:45:48.891818 systemd-networkd[1015]: Enumeration completed Jul 9 23:45:48.892421 systemd-networkd[1015]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 9 23:45:48.892424 systemd-networkd[1015]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 9 23:45:48.893249 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 9 23:45:48.897621 systemd[1]: Reached target network.target - Network. Jul 9 23:45:48.964190 kernel: mlx5_core bab0:00:02.0 enP47792s1: Link up Jul 9 23:45:48.996257 kernel: hv_netvsc 002248be-674e-0022-48be-674e002248be eth0: Data path switched to VF: enP47792s1 Jul 9 23:45:48.996958 systemd-networkd[1015]: enP47792s1: Link UP Jul 9 23:45:48.997009 systemd-networkd[1015]: eth0: Link UP Jul 9 23:45:48.997092 systemd-networkd[1015]: eth0: Gained carrier Jul 9 23:45:48.997098 systemd-networkd[1015]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 9 23:45:49.013308 systemd-networkd[1015]: enP47792s1: Gained carrier Jul 9 23:45:49.026199 systemd-networkd[1015]: eth0: DHCPv4 address 10.200.20.11/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jul 9 23:45:49.860030 ignition[953]: Ignition 2.21.0 Jul 9 23:45:49.860045 ignition[953]: Stage: fetch-offline Jul 9 23:45:49.863478 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 9 23:45:49.860283 ignition[953]: no configs at "/usr/lib/ignition/base.d" Jul 9 23:45:49.870002 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jul 9 23:45:49.860290 ignition[953]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 9 23:45:49.860527 ignition[953]: parsed url from cmdline: "" Jul 9 23:45:49.860537 ignition[953]: no config URL provided Jul 9 23:45:49.860542 ignition[953]: reading system config file "/usr/lib/ignition/user.ign" Jul 9 23:45:49.860551 ignition[953]: no config at "/usr/lib/ignition/user.ign" Jul 9 23:45:49.860556 ignition[953]: failed to fetch config: resource requires networking Jul 9 23:45:49.860807 ignition[953]: Ignition finished successfully Jul 9 23:45:49.896016 ignition[1025]: Ignition 2.21.0 Jul 9 23:45:49.896020 ignition[1025]: Stage: fetch Jul 9 23:45:49.896205 ignition[1025]: no configs at "/usr/lib/ignition/base.d" Jul 9 23:45:49.896212 ignition[1025]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 9 23:45:49.896275 ignition[1025]: parsed url from cmdline: "" Jul 9 23:45:49.896277 ignition[1025]: no config URL provided Jul 9 23:45:49.896280 ignition[1025]: reading system config file "/usr/lib/ignition/user.ign" Jul 9 23:45:49.896286 ignition[1025]: no config at "/usr/lib/ignition/user.ign" Jul 9 23:45:49.896322 ignition[1025]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jul 9 23:45:49.991773 ignition[1025]: GET result: OK Jul 9 23:45:49.991842 ignition[1025]: config has been read from IMDS userdata Jul 9 23:45:49.991862 ignition[1025]: parsing config with SHA512: b78a3e520a8fa7cfcf32b995241831d996e2fee19971f33df1d72624580ec5878f6b4a5b98a776ab7d78e2ad9afcf4a03fc2638777d11019acbf737728674203 Jul 9 23:45:49.996814 unknown[1025]: fetched base config from "system" Jul 9 23:45:49.997139 ignition[1025]: fetch: fetch complete Jul 9 23:45:49.996819 unknown[1025]: fetched base config from "system" Jul 9 23:45:49.997142 ignition[1025]: fetch: fetch passed Jul 9 23:45:49.996823 unknown[1025]: fetched user config from "azure" Jul 9 23:45:49.997186 ignition[1025]: Ignition finished 
successfully Jul 9 23:45:49.999048 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jul 9 23:45:50.007261 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 9 23:45:50.030207 systemd-networkd[1015]: eth0: Gained IPv6LL Jul 9 23:45:50.039979 ignition[1032]: Ignition 2.21.0 Jul 9 23:45:50.039992 ignition[1032]: Stage: kargs Jul 9 23:45:50.040205 ignition[1032]: no configs at "/usr/lib/ignition/base.d" Jul 9 23:45:50.045745 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 9 23:45:50.040214 ignition[1032]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 9 23:45:50.053263 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 9 23:45:50.040661 ignition[1032]: kargs: kargs passed Jul 9 23:45:50.040690 ignition[1032]: Ignition finished successfully Jul 9 23:45:50.078690 ignition[1038]: Ignition 2.21.0 Jul 9 23:45:50.078705 ignition[1038]: Stage: disks Jul 9 23:45:50.083348 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 9 23:45:50.078834 ignition[1038]: no configs at "/usr/lib/ignition/base.d" Jul 9 23:45:50.089509 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 9 23:45:50.078841 ignition[1038]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 9 23:45:50.093985 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 9 23:45:50.079317 ignition[1038]: disks: disks passed Jul 9 23:45:50.102670 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 9 23:45:50.079359 ignition[1038]: Ignition finished successfully Jul 9 23:45:50.110201 systemd[1]: Reached target sysinit.target - System Initialization. Jul 9 23:45:50.118255 systemd[1]: Reached target basic.target - Basic System. Jul 9 23:45:50.126450 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Jul 9 23:45:50.199981 systemd-fsck[1046]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks Jul 9 23:45:50.207328 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 9 23:45:50.212925 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 9 23:45:50.395189 kernel: EXT4-fs (sda9): mounted filesystem 961fd3ec-635c-4a87-8aef-ca8f12cd8be8 r/w with ordered data mode. Quota mode: none. Jul 9 23:45:50.395570 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 9 23:45:50.399442 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 9 23:45:50.420975 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 9 23:45:50.434615 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 9 23:45:50.442284 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jul 9 23:45:50.451531 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1060) Jul 9 23:45:50.461409 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 9 23:45:50.487817 kernel: BTRFS info (device sda6): first mount of filesystem 3e5253a1-0691-476f-bde5-7794093008ce Jul 9 23:45:50.487833 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jul 9 23:45:50.487840 kernel: BTRFS info (device sda6): using free-space-tree Jul 9 23:45:50.461438 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 9 23:45:50.475570 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 9 23:45:50.479444 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 9 23:45:50.493282 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jul 9 23:45:50.860273 systemd-networkd[1015]: enP47792s1: Gained IPv6LL Jul 9 23:45:50.875367 coreos-metadata[1062]: Jul 09 23:45:50.875 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jul 9 23:45:50.882549 coreos-metadata[1062]: Jul 09 23:45:50.882 INFO Fetch successful Jul 9 23:45:50.886537 coreos-metadata[1062]: Jul 09 23:45:50.886 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jul 9 23:45:50.895015 coreos-metadata[1062]: Jul 09 23:45:50.894 INFO Fetch successful Jul 9 23:45:50.912327 coreos-metadata[1062]: Jul 09 23:45:50.912 INFO wrote hostname ci-4344.1.1-n-5de0cd73c3 to /sysroot/etc/hostname Jul 9 23:45:50.918643 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jul 9 23:45:51.134156 initrd-setup-root[1091]: cut: /sysroot/etc/passwd: No such file or directory Jul 9 23:45:51.154577 initrd-setup-root[1098]: cut: /sysroot/etc/group: No such file or directory Jul 9 23:45:51.159373 initrd-setup-root[1105]: cut: /sysroot/etc/shadow: No such file or directory Jul 9 23:45:51.164044 initrd-setup-root[1112]: cut: /sysroot/etc/gshadow: No such file or directory Jul 9 23:45:51.754907 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 9 23:45:51.760841 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 9 23:45:51.776536 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 9 23:45:51.793462 kernel: BTRFS info (device sda6): last unmount of filesystem 3e5253a1-0691-476f-bde5-7794093008ce Jul 9 23:45:51.787852 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 9 23:45:51.812087 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jul 9 23:45:51.820422 ignition[1181]: INFO : Ignition 2.21.0 Jul 9 23:45:51.820422 ignition[1181]: INFO : Stage: mount Jul 9 23:45:51.831581 ignition[1181]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 9 23:45:51.831581 ignition[1181]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 9 23:45:51.831581 ignition[1181]: INFO : mount: mount passed Jul 9 23:45:51.831581 ignition[1181]: INFO : Ignition finished successfully Jul 9 23:45:51.822211 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 9 23:45:51.827973 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 9 23:45:51.855245 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 9 23:45:51.884788 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1192) Jul 9 23:45:51.884824 kernel: BTRFS info (device sda6): first mount of filesystem 3e5253a1-0691-476f-bde5-7794093008ce Jul 9 23:45:51.889138 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jul 9 23:45:51.891969 kernel: BTRFS info (device sda6): using free-space-tree Jul 9 23:45:51.894142 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 9 23:45:51.914872 ignition[1209]: INFO : Ignition 2.21.0 Jul 9 23:45:51.914872 ignition[1209]: INFO : Stage: files Jul 9 23:45:51.922894 ignition[1209]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 9 23:45:51.922894 ignition[1209]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 9 23:45:51.922894 ignition[1209]: DEBUG : files: compiled without relabeling support, skipping Jul 9 23:45:51.935222 ignition[1209]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 9 23:45:51.935222 ignition[1209]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 9 23:45:51.977069 ignition[1209]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 9 23:45:51.982282 ignition[1209]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 9 23:45:51.982282 ignition[1209]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 9 23:45:51.977437 unknown[1209]: wrote ssh authorized keys file for user: core Jul 9 23:45:51.995552 ignition[1209]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jul 9 23:45:52.002895 ignition[1209]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jul 9 23:45:52.036361 ignition[1209]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 9 23:45:52.210946 ignition[1209]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jul 9 23:45:52.210946 ignition[1209]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 9 23:45:52.225436 ignition[1209]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jul 9 
23:45:52.225436 ignition[1209]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 9 23:45:52.225436 ignition[1209]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 9 23:45:52.225436 ignition[1209]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 9 23:45:52.225436 ignition[1209]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 9 23:45:52.225436 ignition[1209]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 9 23:45:52.225436 ignition[1209]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 9 23:45:52.272372 ignition[1209]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 9 23:45:52.272372 ignition[1209]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 9 23:45:52.272372 ignition[1209]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 9 23:45:52.272372 ignition[1209]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 9 23:45:52.272372 ignition[1209]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 9 23:45:52.272372 ignition[1209]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1 Jul 9 23:45:52.879589 ignition[1209]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 9 23:45:53.079039 ignition[1209]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 9 23:45:53.079039 ignition[1209]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jul 9 23:45:53.096326 ignition[1209]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 9 23:45:53.124362 ignition[1209]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 9 23:45:53.124362 ignition[1209]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jul 9 23:45:53.139426 ignition[1209]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jul 9 23:45:53.139426 ignition[1209]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jul 9 23:45:53.139426 ignition[1209]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 9 23:45:53.139426 ignition[1209]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 9 23:45:53.139426 ignition[1209]: INFO : files: files passed Jul 9 23:45:53.139426 ignition[1209]: INFO : Ignition finished successfully Jul 9 23:45:53.132888 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 9 23:45:53.144611 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 9 23:45:53.167694 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Jul 9 23:45:53.179238 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 9 23:45:53.180371 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 9 23:45:53.212498 initrd-setup-root-after-ignition[1238]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 9 23:45:53.212498 initrd-setup-root-after-ignition[1238]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 9 23:45:53.225448 initrd-setup-root-after-ignition[1242]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 9 23:45:53.221093 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 9 23:45:53.230369 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 9 23:45:53.240829 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 9 23:45:53.275431 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 9 23:45:53.275523 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 9 23:45:53.284092 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 9 23:45:53.292029 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 9 23:45:53.299642 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 9 23:45:53.300206 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 9 23:45:53.334824 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 9 23:45:53.340891 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 9 23:45:53.367753 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 9 23:45:53.372290 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. 
Jul 9 23:45:53.380800 systemd[1]: Stopped target timers.target - Timer Units. Jul 9 23:45:53.388492 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 9 23:45:53.388575 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 9 23:45:53.400146 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 9 23:45:53.404166 systemd[1]: Stopped target basic.target - Basic System. Jul 9 23:45:53.409004 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 9 23:45:53.409264 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 9 23:45:53.409514 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 9 23:45:53.409761 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jul 9 23:45:53.410013 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 9 23:45:53.410264 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 9 23:45:53.410521 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 9 23:45:53.410798 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 9 23:45:53.411038 systemd[1]: Stopped target swap.target - Swaps. Jul 9 23:45:53.411274 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 9 23:45:53.411354 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 9 23:45:53.498586 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 9 23:45:53.506737 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 9 23:45:53.515110 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 9 23:45:53.515178 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 9 23:45:53.524122 systemd[1]: dracut-initqueue.service: Deactivated successfully. 
Jul 9 23:45:53.524223 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 9 23:45:53.537043 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 9 23:45:53.537123 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 9 23:45:53.542266 systemd[1]: ignition-files.service: Deactivated successfully. Jul 9 23:45:53.542331 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 9 23:45:53.551865 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jul 9 23:45:53.551928 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jul 9 23:45:53.616737 ignition[1262]: INFO : Ignition 2.21.0 Jul 9 23:45:53.616737 ignition[1262]: INFO : Stage: umount Jul 9 23:45:53.616737 ignition[1262]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 9 23:45:53.616737 ignition[1262]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 9 23:45:53.616737 ignition[1262]: INFO : umount: umount passed Jul 9 23:45:53.616737 ignition[1262]: INFO : Ignition finished successfully Jul 9 23:45:53.565312 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 9 23:45:53.587030 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 9 23:45:53.598138 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 9 23:45:53.598263 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 9 23:45:53.611216 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 9 23:45:53.611311 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 9 23:45:53.624770 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 9 23:45:53.625535 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 9 23:45:53.625622 systemd[1]: Stopped ignition-mount.service - Ignition (mount). 
Jul 9 23:45:53.633660 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 9 23:45:53.633734 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 9 23:45:53.641498 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 9 23:45:53.641564 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 9 23:45:53.648459 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 9 23:45:53.648493 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 9 23:45:53.656012 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 9 23:45:53.656039 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jul 9 23:45:53.663823 systemd[1]: Stopped target network.target - Network. Jul 9 23:45:53.670878 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 9 23:45:53.670912 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 9 23:45:53.679608 systemd[1]: Stopped target paths.target - Path Units. Jul 9 23:45:53.687749 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 9 23:45:53.691279 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 9 23:45:53.696790 systemd[1]: Stopped target slices.target - Slice Units. Jul 9 23:45:53.706421 systemd[1]: Stopped target sockets.target - Socket Units. Jul 9 23:45:53.713853 systemd[1]: iscsid.socket: Deactivated successfully. Jul 9 23:45:53.713890 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 9 23:45:53.717768 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 9 23:45:53.717791 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 9 23:45:53.726122 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 9 23:45:53.726162 systemd[1]: Stopped ignition-setup.service - Ignition (setup). 
Jul 9 23:45:53.734318 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 9 23:45:53.734346 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 9 23:45:53.742797 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 9 23:45:53.751426 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 9 23:45:53.764525 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 9 23:45:53.764795 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 9 23:45:53.777981 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 9 23:45:53.778140 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 9 23:45:53.778218 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 9 23:45:53.956506 kernel: hv_netvsc 002248be-674e-0022-48be-674e002248be eth0: Data path switched from VF: enP47792s1 Jul 9 23:45:53.790394 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 9 23:45:53.791035 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jul 9 23:45:53.798372 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 9 23:45:53.798404 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 9 23:45:53.807415 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 9 23:45:53.820413 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 9 23:45:53.820472 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 9 23:45:53.829470 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 9 23:45:53.829505 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 9 23:45:53.845917 systemd[1]: systemd-modules-load.service: Deactivated successfully. 
Jul 9 23:45:53.845960 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 9 23:45:53.850852 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 9 23:45:53.850898 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 9 23:45:53.862323 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 9 23:45:53.870393 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 9 23:45:53.870454 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 9 23:45:53.896759 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 9 23:45:53.896904 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 9 23:45:53.905214 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 9 23:45:53.905245 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 9 23:45:53.913818 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 9 23:45:53.913842 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 9 23:45:53.921316 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 9 23:45:53.921353 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 9 23:45:53.934482 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 9 23:45:53.934531 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 9 23:45:53.947972 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 9 23:45:53.948019 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 9 23:45:53.957114 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 9 23:45:53.971412 systemd[1]: systemd-network-generator.service: Deactivated successfully. 
Jul 9 23:45:53.971472 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jul 9 23:45:53.983861 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 9 23:45:53.983900 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 9 23:45:53.992208 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jul 9 23:45:53.992254 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 9 23:45:54.000538 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 9 23:45:54.000572 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 9 23:45:54.005578 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 9 23:45:54.005611 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 9 23:45:54.018963 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jul 9 23:45:54.019006 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Jul 9 23:45:54.019029 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 9 23:45:54.019052 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 9 23:45:54.019312 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 9 23:45:54.019369 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 9 23:45:54.053618 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 9 23:45:54.053712 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 9 23:45:54.233840 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 9 23:45:54.233932 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. 
Jul 9 23:45:54.241473 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 9 23:45:54.249395 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 9 23:45:54.249451 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 9 23:45:54.257890 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 9 23:45:54.279160 systemd[1]: Switching root. Jul 9 23:45:54.390992 systemd-journald[224]: Journal stopped Jul 9 23:45:58.338851 systemd-journald[224]: Received SIGTERM from PID 1 (systemd). Jul 9 23:45:58.338869 kernel: SELinux: policy capability network_peer_controls=1 Jul 9 23:45:58.338878 kernel: SELinux: policy capability open_perms=1 Jul 9 23:45:58.338885 kernel: SELinux: policy capability extended_socket_class=1 Jul 9 23:45:58.338890 kernel: SELinux: policy capability always_check_network=0 Jul 9 23:45:58.338895 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 9 23:45:58.338902 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 9 23:45:58.338907 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 9 23:45:58.338913 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 9 23:45:58.338918 kernel: SELinux: policy capability userspace_initial_context=0 Jul 9 23:45:58.338925 systemd[1]: Successfully loaded SELinux policy in 140.765ms. Jul 9 23:45:58.338931 kernel: audit: type=1403 audit(1752104754.997:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 9 23:45:58.338937 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.827ms. Jul 9 23:45:58.338944 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 9 23:45:58.338951 systemd[1]: Detected virtualization microsoft. 
Jul 9 23:45:58.338958 systemd[1]: Detected architecture arm64. Jul 9 23:45:58.338966 systemd[1]: Detected first boot. Jul 9 23:45:58.338972 systemd[1]: Hostname set to . Jul 9 23:45:58.338978 systemd[1]: Initializing machine ID from random generator. Jul 9 23:45:58.338985 zram_generator::config[1305]: No configuration found. Jul 9 23:45:58.338991 kernel: NET: Registered PF_VSOCK protocol family Jul 9 23:45:58.338997 systemd[1]: Populated /etc with preset unit settings. Jul 9 23:45:58.339004 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jul 9 23:45:58.339011 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 9 23:45:58.339017 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 9 23:45:58.339023 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 9 23:45:58.339029 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 9 23:45:58.339036 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 9 23:45:58.339042 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 9 23:45:58.339049 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 9 23:45:58.339055 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 9 23:45:58.339061 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 9 23:45:58.339067 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 9 23:45:58.339073 systemd[1]: Created slice user.slice - User and Session Slice. Jul 9 23:45:58.339080 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 9 23:45:58.339086 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jul 9 23:45:58.339092 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 9 23:45:58.339099 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 9 23:45:58.339106 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 9 23:45:58.339112 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 9 23:45:58.339120 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jul 9 23:45:58.339126 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 9 23:45:58.339132 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 9 23:45:58.339139 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 9 23:45:58.339145 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 9 23:45:58.339152 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 9 23:45:58.339158 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 9 23:45:58.339164 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 9 23:45:58.339170 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 9 23:45:58.343604 systemd[1]: Reached target slices.target - Slice Units. Jul 9 23:45:58.343616 systemd[1]: Reached target swap.target - Swaps. Jul 9 23:45:58.343623 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 9 23:45:58.343630 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 9 23:45:58.343641 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jul 9 23:45:58.343648 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Jul 9 23:45:58.343654 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 9 23:45:58.343660 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 9 23:45:58.343667 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 9 23:45:58.343674 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 9 23:45:58.343681 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 9 23:45:58.343687 systemd[1]: Mounting media.mount - External Media Directory... Jul 9 23:45:58.343694 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 9 23:45:58.343700 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 9 23:45:58.343706 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 9 23:45:58.343714 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 9 23:45:58.343720 systemd[1]: Reached target machines.target - Containers. Jul 9 23:45:58.343727 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 9 23:45:58.343734 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 9 23:45:58.343740 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 9 23:45:58.343747 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 9 23:45:58.343753 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 9 23:45:58.343759 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 9 23:45:58.343766 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 9 23:45:58.343772 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Jul 9 23:45:58.343779 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 9 23:45:58.343786 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 9 23:45:58.343792 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 9 23:45:58.343799 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 9 23:45:58.343805 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 9 23:45:58.343811 kernel: fuse: init (API version 7.41) Jul 9 23:45:58.343817 systemd[1]: Stopped systemd-fsck-usr.service. Jul 9 23:45:58.343824 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 9 23:45:58.343831 kernel: loop: module loaded Jul 9 23:45:58.343837 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 9 23:45:58.343843 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 9 23:45:58.343849 kernel: ACPI: bus type drm_connector registered Jul 9 23:45:58.343855 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 9 23:45:58.343861 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 9 23:45:58.343868 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 9 23:45:58.343874 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 9 23:45:58.343881 systemd[1]: verity-setup.service: Deactivated successfully. Jul 9 23:45:58.343887 systemd[1]: Stopped verity-setup.service. Jul 9 23:45:58.343894 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
Jul 9 23:45:58.343900 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 9 23:45:58.343906 systemd[1]: Mounted media.mount - External Media Directory. Jul 9 23:45:58.343930 systemd-journald[1409]: Collecting audit messages is disabled. Jul 9 23:45:58.343949 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 9 23:45:58.343956 systemd-journald[1409]: Journal started Jul 9 23:45:58.343970 systemd-journald[1409]: Runtime Journal (/run/log/journal/d69db9229c7b4763844f5831c41d27ac) is 8M, max 78.5M, 70.5M free. Jul 9 23:45:57.586056 systemd[1]: Queued start job for default target multi-user.target. Jul 9 23:45:57.592597 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jul 9 23:45:57.592870 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 9 23:45:57.593132 systemd[1]: systemd-journald.service: Consumed 2.188s CPU time. Jul 9 23:45:58.353113 systemd[1]: Started systemd-journald.service - Journal Service. Jul 9 23:45:58.353823 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 9 23:45:58.358759 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 9 23:45:58.362888 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 9 23:45:58.369409 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 9 23:45:58.375795 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 9 23:45:58.376006 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 9 23:45:58.381625 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 9 23:45:58.381843 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 9 23:45:58.387089 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 9 23:45:58.387386 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Jul 9 23:45:58.393297 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 9 23:45:58.393424 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 9 23:45:58.398881 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 9 23:45:58.398991 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 9 23:45:58.403819 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 9 23:45:58.403938 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 9 23:45:58.408579 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 9 23:45:58.413467 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 9 23:45:58.420214 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 9 23:45:58.425724 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jul 9 23:45:58.430942 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 9 23:45:58.445408 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 9 23:45:58.451599 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 9 23:45:58.462753 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 9 23:45:58.467952 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 9 23:45:58.467978 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 9 23:45:58.473004 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jul 9 23:45:58.479222 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 9 23:45:58.483666 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jul 9 23:45:58.498466 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 9 23:45:58.512814 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 9 23:45:58.517485 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 9 23:45:58.520141 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 9 23:45:58.524373 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 9 23:45:58.525034 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 9 23:45:58.533257 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 9 23:45:58.539213 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 9 23:45:58.547115 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 9 23:45:58.554626 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 9 23:45:58.561261 systemd-journald[1409]: Time spent on flushing to /var/log/journal/d69db9229c7b4763844f5831c41d27ac is 44.335ms for 939 entries. Jul 9 23:45:58.561261 systemd-journald[1409]: System Journal (/var/log/journal/d69db9229c7b4763844f5831c41d27ac) is 11.8M, max 2.6G, 2.6G free. Jul 9 23:45:58.673547 systemd-journald[1409]: Received client request to flush runtime journal. Jul 9 23:45:58.673595 systemd-journald[1409]: /var/log/journal/d69db9229c7b4763844f5831c41d27ac/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating. Jul 9 23:45:58.673612 systemd-journald[1409]: Rotating system journal. 
Jul 9 23:45:58.673628 kernel: loop0: detected capacity change from 0 to 107312 Jul 9 23:45:58.566968 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 9 23:45:58.573550 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 9 23:45:58.582253 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jul 9 23:45:58.622231 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 9 23:45:58.657740 systemd-tmpfiles[1446]: ACLs are not supported, ignoring. Jul 9 23:45:58.657747 systemd-tmpfiles[1446]: ACLs are not supported, ignoring. Jul 9 23:45:58.661135 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 9 23:45:58.669701 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 9 23:45:58.677462 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 9 23:45:58.685046 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 9 23:45:58.685530 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 9 23:45:58.995201 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 9 23:45:59.012766 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 9 23:45:59.018179 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 9 23:45:59.034617 systemd-tmpfiles[1464]: ACLs are not supported, ignoring. Jul 9 23:45:59.034831 systemd-tmpfiles[1464]: ACLs are not supported, ignoring. Jul 9 23:45:59.037757 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jul 9 23:45:59.058192 kernel: loop1: detected capacity change from 0 to 138376 Jul 9 23:45:59.422198 kernel: loop2: detected capacity change from 0 to 28936 Jul 9 23:45:59.522636 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 9 23:45:59.529264 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 9 23:45:59.555560 systemd-udevd[1469]: Using default interface naming scheme 'v255'. Jul 9 23:45:59.661151 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 9 23:45:59.671661 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 9 23:45:59.708235 kernel: loop3: detected capacity change from 0 to 203944 Jul 9 23:45:59.716389 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 9 23:45:59.752193 kernel: loop4: detected capacity change from 0 to 107312 Jul 9 23:45:59.766233 kernel: loop5: detected capacity change from 0 to 138376 Jul 9 23:45:59.772793 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jul 9 23:45:59.780227 kernel: loop6: detected capacity change from 0 to 28936 Jul 9 23:45:59.790193 kernel: loop7: detected capacity change from 0 to 203944 Jul 9 23:45:59.799478 (sd-merge)[1503]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Jul 9 23:45:59.802405 (sd-merge)[1503]: Merged extensions into '/usr'. Jul 9 23:45:59.813067 systemd[1]: Reload requested from client PID 1444 ('systemd-sysext') (unit systemd-sysext.service)... Jul 9 23:45:59.813083 systemd[1]: Reloading... 
Jul 9 23:45:59.850267 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#53 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jul 9 23:45:59.862200 kernel: mousedev: PS/2 mouse device common for all mice Jul 9 23:45:59.913063 kernel: hv_vmbus: registering driver hv_balloon Jul 9 23:45:59.913127 kernel: hv_vmbus: registering driver hyperv_fb Jul 9 23:45:59.913141 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jul 9 23:45:59.921192 kernel: hv_balloon: Memory hot add disabled on ARM64 Jul 9 23:45:59.927092 zram_generator::config[1571]: No configuration found. Jul 9 23:45:59.939190 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jul 9 23:45:59.948332 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jul 9 23:45:59.958949 kernel: Console: switching to colour dummy device 80x25 Jul 9 23:45:59.966117 kernel: Console: switching to colour frame buffer device 128x48 Jul 9 23:46:00.048880 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 9 23:46:00.131206 kernel: MACsec IEEE 802.1AE Jul 9 23:46:00.166561 systemd[1]: Reloading finished in 352 ms. Jul 9 23:46:00.173228 systemd-networkd[1490]: lo: Link UP Jul 9 23:46:00.173235 systemd-networkd[1490]: lo: Gained carrier Jul 9 23:46:00.174631 systemd-networkd[1490]: Enumeration completed Jul 9 23:46:00.174880 systemd-networkd[1490]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 9 23:46:00.174887 systemd-networkd[1490]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 9 23:46:00.179916 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 9 23:46:00.184669 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Jul 9 23:46:00.189438 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 9 23:46:00.217122 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jul 9 23:46:00.233187 kernel: mlx5_core bab0:00:02.0 enP47792s1: Link up Jul 9 23:46:00.238068 systemd[1]: Starting ensure-sysext.service... Jul 9 23:46:00.241520 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 9 23:46:00.247382 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 9 23:46:00.257444 kernel: hv_netvsc 002248be-674e-0022-48be-674e002248be eth0: Data path switched to VF: enP47792s1 Jul 9 23:46:00.257596 systemd-networkd[1490]: enP47792s1: Link UP Jul 9 23:46:00.257700 systemd-networkd[1490]: eth0: Link UP Jul 9 23:46:00.257703 systemd-networkd[1490]: eth0: Gained carrier Jul 9 23:46:00.257737 systemd-networkd[1490]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 9 23:46:00.261857 systemd-networkd[1490]: enP47792s1: Gained carrier Jul 9 23:46:00.262389 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 9 23:46:00.269343 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 9 23:46:00.274215 systemd-networkd[1490]: eth0: DHCPv4 address 10.200.20.11/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jul 9 23:46:00.276346 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 9 23:46:00.291653 systemd-tmpfiles[1687]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jul 9 23:46:00.291943 systemd[1]: Reload requested from client PID 1682 ('systemctl') (unit ensure-sysext.service)... Jul 9 23:46:00.291955 systemd[1]: Reloading... 
Jul 9 23:46:00.292079 systemd-tmpfiles[1687]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jul 9 23:46:00.292345 systemd-tmpfiles[1687]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 9 23:46:00.294072 systemd-tmpfiles[1687]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 9 23:46:00.294509 systemd-tmpfiles[1687]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 9 23:46:00.294642 systemd-tmpfiles[1687]: ACLs are not supported, ignoring. Jul 9 23:46:00.294671 systemd-tmpfiles[1687]: ACLs are not supported, ignoring. Jul 9 23:46:00.310966 systemd-tmpfiles[1687]: Detected autofs mount point /boot during canonicalization of boot. Jul 9 23:46:00.310975 systemd-tmpfiles[1687]: Skipping /boot Jul 9 23:46:00.318158 systemd-tmpfiles[1687]: Detected autofs mount point /boot during canonicalization of boot. Jul 9 23:46:00.318257 systemd-tmpfiles[1687]: Skipping /boot Jul 9 23:46:00.352193 zram_generator::config[1720]: No configuration found. Jul 9 23:46:00.424287 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 9 23:46:00.502966 systemd[1]: Reloading finished in 210 ms. Jul 9 23:46:00.521197 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 9 23:46:00.537195 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 9 23:46:00.542804 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 9 23:46:00.554919 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
Jul 9 23:46:00.576111 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 9 23:46:00.582485 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 9 23:46:00.586869 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 9 23:46:00.593426 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 9 23:46:00.599588 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 9 23:46:00.603991 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 9 23:46:00.604160 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 9 23:46:00.605344 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 9 23:46:00.614676 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 9 23:46:00.621974 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 9 23:46:00.629485 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 9 23:46:00.631566 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 9 23:46:00.636934 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 9 23:46:00.637059 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 9 23:46:00.643002 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 9 23:46:00.643130 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 9 23:46:00.653841 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 9 23:46:00.657392 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 9 23:46:00.666296 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 9 23:46:00.675051 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 9 23:46:00.680796 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 9 23:46:00.680987 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 9 23:46:00.683692 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 9 23:46:00.683830 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 9 23:46:00.688758 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 9 23:46:00.688876 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 9 23:46:00.695920 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 9 23:46:00.696301 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 9 23:46:00.698916 augenrules[1817]: No rules
Jul 9 23:46:00.702200 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 9 23:46:00.702338 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 9 23:46:00.709360 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 9 23:46:00.724699 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 9 23:46:00.728402 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 9 23:46:00.735768 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 9 23:46:00.742713 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 9 23:46:00.746762 augenrules[1828]: /sbin/augenrules: No change
Jul 9 23:46:00.748346 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 9 23:46:00.754699 augenrules[1848]: No rules
Jul 9 23:46:00.757187 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 9 23:46:00.762795 systemd-resolved[1795]: Positive Trust Anchors:
Jul 9 23:46:00.763216 systemd-resolved[1795]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 9 23:46:00.763318 systemd-resolved[1795]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 9 23:46:00.763467 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 9 23:46:00.763562 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 9 23:46:00.763663 systemd[1]: Reached target time-set.target - System Time Set.
Jul 9 23:46:00.773309 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 9 23:46:00.778893 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 9 23:46:00.780205 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 9 23:46:00.785454 systemd-resolved[1795]: Using system hostname 'ci-4344.1.1-n-5de0cd73c3'.
Jul 9 23:46:00.786752 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 9 23:46:00.786882 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 9 23:46:00.792372 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 9 23:46:00.798233 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 9 23:46:00.803159 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 9 23:46:00.803322 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 9 23:46:00.807850 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 9 23:46:00.807959 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 9 23:46:00.812922 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 9 23:46:00.813046 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 9 23:46:00.819876 systemd[1]: Finished ensure-sysext.service.
Jul 9 23:46:00.826480 systemd[1]: Reached target network.target - Network.
Jul 9 23:46:00.830051 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 9 23:46:00.835161 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 9 23:46:00.835209 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 9 23:46:01.156292 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 9 23:46:01.162658 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 9 23:46:01.612331 systemd-networkd[1490]: enP47792s1: Gained IPv6LL
Jul 9 23:46:01.932273 systemd-networkd[1490]: eth0: Gained IPv6LL
Jul 9 23:46:01.935245 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jul 9 23:46:01.940516 systemd[1]: Reached target network-online.target - Network is Online.
Jul 9 23:46:04.084204 ldconfig[1439]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 9 23:46:04.103805 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 9 23:46:04.110811 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 9 23:46:04.122513 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 9 23:46:04.127347 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 9 23:46:04.131582 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 9 23:46:04.136374 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 9 23:46:04.141429 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 9 23:46:04.145867 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 9 23:46:04.150609 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 9 23:46:04.155653 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 9 23:46:04.155673 systemd[1]: Reached target paths.target - Path Units.
Jul 9 23:46:04.159326 systemd[1]: Reached target timers.target - Timer Units.
Jul 9 23:46:04.164028 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 9 23:46:04.169370 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 9 23:46:04.174734 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jul 9 23:46:04.179956 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jul 9 23:46:04.184869 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jul 9 23:46:04.190502 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 9 23:46:04.194946 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jul 9 23:46:04.200031 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 9 23:46:04.204270 systemd[1]: Reached target sockets.target - Socket Units.
Jul 9 23:46:04.208690 systemd[1]: Reached target basic.target - Basic System.
Jul 9 23:46:04.212302 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 9 23:46:04.212320 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 9 23:46:04.213934 systemd[1]: Starting chronyd.service - NTP client/server...
Jul 9 23:46:04.226268 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 9 23:46:04.239364 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jul 9 23:46:04.245822 (chronyd)[1870]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Jul 9 23:46:04.246216 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 9 23:46:04.259945 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 9 23:46:04.265959 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 9 23:46:04.271344 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 9 23:46:04.272403 jq[1878]: false
Jul 9 23:46:04.280076 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 9 23:46:04.283288 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Jul 9 23:46:04.289483 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Jul 9 23:46:04.290294 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 9 23:46:04.296077 chronyd[1886]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Jul 9 23:46:04.303407 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 9 23:46:04.306329 KVP[1880]: KVP starting; pid is:1880
Jul 9 23:46:04.311481 kernel: hv_utils: KVP IC version 4.0
Jul 9 23:46:04.310868 KVP[1880]: KVP LIC Version: 3.1
Jul 9 23:46:04.312722 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jul 9 23:46:04.323955 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 9 23:46:04.327926 extend-filesystems[1879]: Found /dev/sda6
Jul 9 23:46:04.332011 chronyd[1886]: Timezone right/UTC failed leap second check, ignoring
Jul 9 23:46:04.332152 chronyd[1886]: Loaded seccomp filter (level 2)
Jul 9 23:46:04.333404 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 9 23:46:04.341382 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 9 23:46:04.347025 extend-filesystems[1879]: Found /dev/sda9
Jul 9 23:46:04.350022 extend-filesystems[1879]: Checking size of /dev/sda9
Jul 9 23:46:04.356467 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 9 23:46:04.361806 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 9 23:46:04.362265 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 9 23:46:04.363478 systemd[1]: Starting update-engine.service - Update Engine...
Jul 9 23:46:04.371330 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 9 23:46:04.374553 extend-filesystems[1879]: Old size kept for /dev/sda9
Jul 9 23:46:04.379101 systemd[1]: Started chronyd.service - NTP client/server.
Jul 9 23:46:04.395217 jq[1909]: true
Jul 9 23:46:04.395230 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 9 23:46:04.401692 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 9 23:46:04.401844 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 9 23:46:04.402032 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 9 23:46:04.402145 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 9 23:46:04.408589 systemd[1]: motdgen.service: Deactivated successfully.
Jul 9 23:46:04.409229 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 9 23:46:04.416687 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 9 23:46:04.416834 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 9 23:46:04.434096 (ntainerd)[1923]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 9 23:46:04.441250 jq[1922]: true
Jul 9 23:46:04.477812 update_engine[1907]: I20250709 23:46:04.468128 1907 main.cc:92] Flatcar Update Engine starting
Jul 9 23:46:04.483264 systemd-logind[1904]: New seat seat0.
Jul 9 23:46:04.490652 systemd-logind[1904]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jul 9 23:46:04.490806 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 9 23:46:04.494808 tar[1919]: linux-arm64/helm
Jul 9 23:46:04.563753 bash[1981]: Updated "/home/core/.ssh/authorized_keys"
Jul 9 23:46:04.569966 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 9 23:46:04.578097 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jul 9 23:46:04.594782 dbus-daemon[1873]: [system] SELinux support is enabled
Jul 9 23:46:04.596922 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 9 23:46:04.601107 update_engine[1907]: I20250709 23:46:04.600474 1907 update_check_scheduler.cc:74] Next update check in 3m47s
Jul 9 23:46:04.604823 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 9 23:46:04.604851 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 9 23:46:04.612471 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 9 23:46:04.612490 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 9 23:46:04.625037 systemd[1]: Started update-engine.service - Update Engine.
Jul 9 23:46:04.625307 dbus-daemon[1873]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jul 9 23:46:04.636862 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 9 23:46:04.670443 coreos-metadata[1872]: Jul 09 23:46:04.670 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jul 9 23:46:04.674232 coreos-metadata[1872]: Jul 09 23:46:04.674 INFO Fetch successful
Jul 9 23:46:04.674232 coreos-metadata[1872]: Jul 09 23:46:04.674 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Jul 9 23:46:04.679935 coreos-metadata[1872]: Jul 09 23:46:04.679 INFO Fetch successful
Jul 9 23:46:04.680326 coreos-metadata[1872]: Jul 09 23:46:04.680 INFO Fetching http://168.63.129.16/machine/6d9d7644-c4fd-4bd4-9a97-d1ad3a3ae9c4/5a375103%2D9619%2D4f2e%2D8862%2D10e273c917f8.%5Fci%2D4344.1.1%2Dn%2D5de0cd73c3?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Jul 9 23:46:04.682360 coreos-metadata[1872]: Jul 09 23:46:04.682 INFO Fetch successful
Jul 9 23:46:04.682512 coreos-metadata[1872]: Jul 09 23:46:04.682 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Jul 9 23:46:04.692162 coreos-metadata[1872]: Jul 09 23:46:04.691 INFO Fetch successful
Jul 9 23:46:04.728552 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jul 9 23:46:04.735485 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jul 9 23:46:04.741702 sshd_keygen[1912]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 9 23:46:04.779194 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 9 23:46:04.791112 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 9 23:46:04.803308 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Jul 9 23:46:04.817429 systemd[1]: issuegen.service: Deactivated successfully.
Jul 9 23:46:04.821217 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul 9 23:46:04.827870 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Jul 9 23:46:04.836101 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul 9 23:46:04.870437 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 9 23:46:04.880401 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul 9 23:46:04.890669 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Jul 9 23:46:04.898795 systemd[1]: Reached target getty.target - Login Prompts.
Jul 9 23:46:04.913658 locksmithd[2018]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 9 23:46:04.936249 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jul 9 23:46:05.003515 containerd[1923]: time="2025-07-09T23:46:05Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jul 9 23:46:05.006506 containerd[1923]: time="2025-07-09T23:46:05.006478364Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
Jul 9 23:46:05.021013 containerd[1923]: time="2025-07-09T23:46:05.020057876Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.744µs"
Jul 9 23:46:05.021013 containerd[1923]: time="2025-07-09T23:46:05.020085620Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jul 9 23:46:05.021013 containerd[1923]: time="2025-07-09T23:46:05.020099668Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jul 9 23:46:05.021268 containerd[1923]: time="2025-07-09T23:46:05.021244332Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jul 9 23:46:05.021350 containerd[1923]: time="2025-07-09T23:46:05.021336540Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jul 9 23:46:05.021409 containerd[1923]: time="2025-07-09T23:46:05.021398476Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 9 23:46:05.021507 containerd[1923]: time="2025-07-09T23:46:05.021493012Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 9 23:46:05.021561 containerd[1923]: time="2025-07-09T23:46:05.021548212Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 9 23:46:05.022810 containerd[1923]: time="2025-07-09T23:46:05.021770516Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 9 23:46:05.022810 containerd[1923]: time="2025-07-09T23:46:05.021788044Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 9 23:46:05.022810 containerd[1923]: time="2025-07-09T23:46:05.021797204Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 9 23:46:05.022810 containerd[1923]: time="2025-07-09T23:46:05.021802188Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jul 9 23:46:05.022810 containerd[1923]: time="2025-07-09T23:46:05.021870036Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jul 9 23:46:05.022810 containerd[1923]: time="2025-07-09T23:46:05.022004124Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 9 23:46:05.022810 containerd[1923]: time="2025-07-09T23:46:05.022022628Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 9 23:46:05.022810 containerd[1923]: time="2025-07-09T23:46:05.022029516Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jul 9 23:46:05.022810 containerd[1923]: time="2025-07-09T23:46:05.022055796Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jul 9 23:46:05.022810 containerd[1923]: time="2025-07-09T23:46:05.022201084Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jul 9 23:46:05.022810 containerd[1923]: time="2025-07-09T23:46:05.022257652Z" level=info msg="metadata content store policy set" policy=shared
Jul 9 23:46:05.037191 containerd[1923]: time="2025-07-09T23:46:05.037154996Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jul 9 23:46:05.037359 containerd[1923]: time="2025-07-09T23:46:05.037338460Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jul 9 23:46:05.037570 containerd[1923]: time="2025-07-09T23:46:05.037524188Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jul 9 23:46:05.039197 containerd[1923]: time="2025-07-09T23:46:05.038270372Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jul 9 23:46:05.039197 containerd[1923]: time="2025-07-09T23:46:05.038383892Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jul 9 23:46:05.039197 containerd[1923]: time="2025-07-09T23:46:05.038395820Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jul 9 23:46:05.039197 containerd[1923]: time="2025-07-09T23:46:05.038405844Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jul 9 23:46:05.039197 containerd[1923]: time="2025-07-09T23:46:05.038414540Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jul 9 23:46:05.039197 containerd[1923]: time="2025-07-09T23:46:05.038421916Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jul 9 23:46:05.039197 containerd[1923]: time="2025-07-09T23:46:05.038428972Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jul 9 23:46:05.039197 containerd[1923]: time="2025-07-09T23:46:05.038434612Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jul 9 23:46:05.039197 containerd[1923]: time="2025-07-09T23:46:05.038442980Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jul 9 23:46:05.039197 containerd[1923]: time="2025-07-09T23:46:05.038542308Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jul 9 23:46:05.039197 containerd[1923]: time="2025-07-09T23:46:05.038556548Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jul 9 23:46:05.039197 containerd[1923]: time="2025-07-09T23:46:05.038567044Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jul 9 23:46:05.039197 containerd[1923]: time="2025-07-09T23:46:05.038573948Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jul 9 23:46:05.039197 containerd[1923]: time="2025-07-09T23:46:05.038581108Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jul 9 23:46:05.039416 containerd[1923]: time="2025-07-09T23:46:05.038588532Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jul 9 23:46:05.039416 containerd[1923]: time="2025-07-09T23:46:05.038596140Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jul 9 23:46:05.039416 containerd[1923]: time="2025-07-09T23:46:05.038603572Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jul 9 23:46:05.039416 containerd[1923]: time="2025-07-09T23:46:05.038616764Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jul 9 23:46:05.039416 containerd[1923]: time="2025-07-09T23:46:05.038624188Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jul 9 23:46:05.039416 containerd[1923]: time="2025-07-09T23:46:05.038633828Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jul 9 23:46:05.039416 containerd[1923]: time="2025-07-09T23:46:05.038681588Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jul 9 23:46:05.039416 containerd[1923]: time="2025-07-09T23:46:05.038691780Z" level=info msg="Start snapshots syncer"
Jul 9 23:46:05.039416 containerd[1923]: time="2025-07-09T23:46:05.038717428Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jul 9 23:46:05.039520 containerd[1923]: time="2025-07-09T23:46:05.038863756Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Jul 9 23:46:05.039520 containerd[1923]: time="2025-07-09T23:46:05.038894772Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Jul 9 23:46:05.039596 containerd[1923]: time="2025-07-09T23:46:05.038959132Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Jul 9 23:46:05.039596 containerd[1923]: time="2025-07-09T23:46:05.039051476Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Jul 9 23:46:05.039596 containerd[1923]: time="2025-07-09T23:46:05.039066084Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Jul 9 23:46:05.039596 containerd[1923]: time="2025-07-09T23:46:05.039073052Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Jul 9 23:46:05.039596 containerd[1923]: time="2025-07-09T23:46:05.039081844Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Jul 9 23:46:05.039596 containerd[1923]: time="2025-07-09T23:46:05.039089420Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Jul 9 23:46:05.039596 containerd[1923]: time="2025-07-09T23:46:05.039095828Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Jul 9 23:46:05.039596 containerd[1923]: time="2025-07-09T23:46:05.039102540Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Jul 9 23:46:05.039596 containerd[1923]: time="2025-07-09T23:46:05.039118908Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Jul 9 23:46:05.039596 containerd[1923]: time="2025-07-09T23:46:05.039125652Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Jul 9 23:46:05.039596 containerd[1923]: time="2025-07-09T23:46:05.039131708Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Jul 9 23:46:05.039596 containerd[1923]: time="2025-07-09T23:46:05.039163340Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jul 9 23:46:05.040147 containerd[1923]: time="2025-07-09T23:46:05.040122228Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jul 9 23:46:05.040534 containerd[1923]: time="2025-07-09T23:46:05.040516412Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jul 9 23:46:05.040534 containerd[1923]: time="2025-07-09T23:46:05.041203668Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jul 9 23:46:05.040534 containerd[1923]: time="2025-07-09T23:46:05.041214684Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Jul 9 23:46:05.040534 containerd[1923]: time="2025-07-09T23:46:05.041223228Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Jul 9 23:46:05.040534 containerd[1923]: time="2025-07-09T23:46:05.041230876Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Jul 9 23:46:05.040534 containerd[1923]: time="2025-07-09T23:46:05.041247084Z" level=info msg="runtime interface created"
Jul 9 23:46:05.040534 containerd[1923]: time="2025-07-09T23:46:05.041250764Z" level=info msg="created NRI interface"
Jul 9 23:46:05.041471 containerd[1923]: time="2025-07-09T23:46:05.041257588Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Jul 9 23:46:05.041471 containerd[1923]: time="2025-07-09T23:46:05.041408156Z" level=info msg="Connect containerd service"
Jul 9 23:46:05.041471 containerd[1923]: time="2025-07-09T23:46:05.041440964Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jul 9 23:46:05.042751 tar[1919]: linux-arm64/LICENSE
Jul 9 23:46:05.042800 tar[1919]: linux-arm64/README.md
Jul 9 23:46:05.043307 containerd[1923]: time="2025-07-09T23:46:05.043288284Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 9 23:46:05.056393 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jul 9 23:46:05.156115 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 9 23:46:05.161112 (kubelet)[2072]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 9 23:46:05.399000 kubelet[2072]: E0709 23:46:05.398936 2072 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 9 23:46:05.401232 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 9 23:46:05.401330 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 9 23:46:05.401552 systemd[1]: kubelet.service: Consumed 540ms CPU time, 255.7M memory peak.
Jul 9 23:46:05.762079 containerd[1923]: time="2025-07-09T23:46:05.757251004Z" level=info msg="Start subscribing containerd event"
Jul 9 23:46:05.762079 containerd[1923]: time="2025-07-09T23:46:05.757317332Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jul 9 23:46:05.762079 containerd[1923]: time="2025-07-09T23:46:05.757320444Z" level=info msg="Start recovering state"
Jul 9 23:46:05.762079 containerd[1923]: time="2025-07-09T23:46:05.757359948Z" level=info msg=serving...
address=/run/containerd/containerd.sock Jul 9 23:46:05.762079 containerd[1923]: time="2025-07-09T23:46:05.757411564Z" level=info msg="Start event monitor" Jul 9 23:46:05.762079 containerd[1923]: time="2025-07-09T23:46:05.757425076Z" level=info msg="Start cni network conf syncer for default" Jul 9 23:46:05.762079 containerd[1923]: time="2025-07-09T23:46:05.757432196Z" level=info msg="Start streaming server" Jul 9 23:46:05.762079 containerd[1923]: time="2025-07-09T23:46:05.757439212Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 9 23:46:05.762079 containerd[1923]: time="2025-07-09T23:46:05.757443948Z" level=info msg="runtime interface starting up..." Jul 9 23:46:05.762079 containerd[1923]: time="2025-07-09T23:46:05.757449004Z" level=info msg="starting plugins..." Jul 9 23:46:05.762079 containerd[1923]: time="2025-07-09T23:46:05.757460348Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 9 23:46:05.762079 containerd[1923]: time="2025-07-09T23:46:05.757565348Z" level=info msg="containerd successfully booted in 0.754891s" Jul 9 23:46:05.757751 systemd[1]: Started containerd.service - containerd container runtime. Jul 9 23:46:05.762899 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 9 23:46:05.773230 systemd[1]: Startup finished in 1.701s (kernel) + 10.327s (initrd) + 10.915s (userspace) = 22.944s. Jul 9 23:46:06.011287 login[2051]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:46:06.011628 login[2049]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:46:06.017990 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 9 23:46:06.020347 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 9 23:46:06.026169 systemd-logind[1904]: New session 1 of user core. Jul 9 23:46:06.028629 systemd-logind[1904]: New session 2 of user core. 
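The "failed to load cni during init" error logged just before containerd finished booting is routine on a node that has not yet had a network plugin installed: the CRI plugin looks in /etc/cni/net.d (the confDir from the config dump above) and finds nothing. A minimal bridge conflist of the kind a network plugin would eventually drop there might look like the following — illustrative names and subnet, not taken from this host:

```json
{
  "cniVersion": "1.0.0",
  "name": "example-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/24",
        "routes": [ { "dst": "0.0.0.0/0" } ]
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
```

Saved as e.g. /etc/cni/net.d/10-example.conflist, the file would be picked up without a restart by the conf syncer the log starts shortly afterwards ("Start cni network conf syncer for default").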
Jul 9 23:46:06.046160 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 9 23:46:06.048763 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 9 23:46:06.072052 (systemd)[2096]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 9 23:46:06.073952 systemd-logind[1904]: New session c1 of user core. Jul 9 23:46:06.212678 systemd[2096]: Queued start job for default target default.target. Jul 9 23:46:06.217838 systemd[2096]: Created slice app.slice - User Application Slice. Jul 9 23:46:06.217962 systemd[2096]: Reached target paths.target - Paths. Jul 9 23:46:06.218056 systemd[2096]: Reached target timers.target - Timers. Jul 9 23:46:06.219024 systemd[2096]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 9 23:46:06.226439 systemd[2096]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 9 23:46:06.226482 systemd[2096]: Reached target sockets.target - Sockets. Jul 9 23:46:06.226512 systemd[2096]: Reached target basic.target - Basic System. Jul 9 23:46:06.226531 systemd[2096]: Reached target default.target - Main User Target. Jul 9 23:46:06.226549 systemd[2096]: Startup finished in 148ms. Jul 9 23:46:06.226823 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 9 23:46:06.228761 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 9 23:46:06.229561 systemd[1]: Started session-2.scope - Session 2 of User core. 
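The CRI runtime config containerd dumped at startup shows "SystemdCgroup": true for the runc runtime. In containerd's own configuration file that setting corresponds to the following TOML — a sketch in the version 2 layout matching the io.containerd.grpc.v1.cri plugin id seen in the log, not this host's actual /etc/containerd/config.toml:

```toml
version = 2

[plugins."io.containerd.grpc.v1.cri".containerd]
  default_runtime_name = "runc"

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  runtime_type = "io.containerd.runc.v2"

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
```

Using the systemd cgroup driver here matters because kubelet (configured later via kubeadm) is expected to use the same driver; a mismatch is a common source of pod instability.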
Jul 9 23:46:06.254820 waagent[2042]: 2025-07-09T23:46:06.250603Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Jul 9 23:46:06.255065 waagent[2042]: 2025-07-09T23:46:06.254994Z INFO Daemon Daemon OS: flatcar 4344.1.1 Jul 9 23:46:06.258222 waagent[2042]: 2025-07-09T23:46:06.258193Z INFO Daemon Daemon Python: 3.11.12 Jul 9 23:46:06.263773 waagent[2042]: 2025-07-09T23:46:06.263394Z INFO Daemon Daemon Run daemon Jul 9 23:46:06.267800 waagent[2042]: 2025-07-09T23:46:06.267755Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4344.1.1' Jul 9 23:46:06.274648 waagent[2042]: 2025-07-09T23:46:06.274467Z INFO Daemon Daemon Using waagent for provisioning Jul 9 23:46:06.278357 waagent[2042]: 2025-07-09T23:46:06.278321Z INFO Daemon Daemon Activate resource disk Jul 9 23:46:06.281646 waagent[2042]: 2025-07-09T23:46:06.281618Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jul 9 23:46:06.290658 waagent[2042]: 2025-07-09T23:46:06.290619Z INFO Daemon Daemon Found device: None Jul 9 23:46:06.294095 waagent[2042]: 2025-07-09T23:46:06.294066Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jul 9 23:46:06.299931 waagent[2042]: 2025-07-09T23:46:06.299905Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jul 9 23:46:06.308289 waagent[2042]: 2025-07-09T23:46:06.308252Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jul 9 23:46:06.312453 waagent[2042]: 2025-07-09T23:46:06.312422Z INFO Daemon Daemon Running default provisioning handler Jul 9 23:46:06.320152 waagent[2042]: 2025-07-09T23:46:06.320117Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. 
Jul 9 23:46:06.330933 waagent[2042]: 2025-07-09T23:46:06.330891Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jul 9 23:46:06.338090 waagent[2042]: 2025-07-09T23:46:06.338054Z INFO Daemon Daemon cloud-init is enabled: False Jul 9 23:46:06.342100 waagent[2042]: 2025-07-09T23:46:06.342061Z INFO Daemon Daemon Copying ovf-env.xml Jul 9 23:46:06.445189 waagent[2042]: 2025-07-09T23:46:06.442346Z INFO Daemon Daemon Successfully mounted dvd Jul 9 23:46:06.467015 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jul 9 23:46:06.468726 waagent[2042]: 2025-07-09T23:46:06.468690Z INFO Daemon Daemon Detect protocol endpoint Jul 9 23:46:06.472415 waagent[2042]: 2025-07-09T23:46:06.472381Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jul 9 23:46:06.477189 waagent[2042]: 2025-07-09T23:46:06.476523Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Jul 9 23:46:06.481272 waagent[2042]: 2025-07-09T23:46:06.481245Z INFO Daemon Daemon Test for route to 168.63.129.16 Jul 9 23:46:06.486116 waagent[2042]: 2025-07-09T23:46:06.486087Z INFO Daemon Daemon Route to 168.63.129.16 exists Jul 9 23:46:06.489759 waagent[2042]: 2025-07-09T23:46:06.489736Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jul 9 23:46:06.530686 waagent[2042]: 2025-07-09T23:46:06.530630Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jul 9 23:46:06.535425 waagent[2042]: 2025-07-09T23:46:06.535406Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jul 9 23:46:06.539319 waagent[2042]: 2025-07-09T23:46:06.539295Z INFO Daemon Daemon Server preferred version:2015-04-05 Jul 9 23:46:06.626204 waagent[2042]: 2025-07-09T23:46:06.625483Z INFO Daemon Daemon Initializing goal state during protocol detection Jul 9 23:46:06.630330 waagent[2042]: 2025-07-09T23:46:06.630294Z INFO Daemon Daemon Forcing an update of the goal state. 
Jul 9 23:46:06.636996 waagent[2042]: 2025-07-09T23:46:06.636964Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jul 9 23:46:06.653374 waagent[2042]: 2025-07-09T23:46:06.653348Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175 Jul 9 23:46:06.657471 waagent[2042]: 2025-07-09T23:46:06.657443Z INFO Daemon Jul 9 23:46:06.659429 waagent[2042]: 2025-07-09T23:46:06.659405Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: f99cb8dc-ed65-4b60-8e52-48a18da4c84f eTag: 15661765646254070058 source: Fabric] Jul 9 23:46:06.667223 waagent[2042]: 2025-07-09T23:46:06.667198Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jul 9 23:46:06.671840 waagent[2042]: 2025-07-09T23:46:06.671813Z INFO Daemon Jul 9 23:46:06.673973 waagent[2042]: 2025-07-09T23:46:06.673951Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jul 9 23:46:06.681645 waagent[2042]: 2025-07-09T23:46:06.681622Z INFO Daemon Daemon Downloading artifacts profile blob Jul 9 23:46:06.737669 waagent[2042]: 2025-07-09T23:46:06.737619Z INFO Daemon Downloaded certificate {'thumbprint': '9128ABDD232547D3B52C8CAD7D4820E1307F1F0C', 'hasPrivateKey': True} Jul 9 23:46:06.744875 waagent[2042]: 2025-07-09T23:46:06.744844Z INFO Daemon Downloaded certificate {'thumbprint': '7242974C056C3DDD1A4CCB11F125ECCED99638EA', 'hasPrivateKey': False} Jul 9 23:46:06.751631 waagent[2042]: 2025-07-09T23:46:06.751601Z INFO Daemon Fetch goal state completed Jul 9 23:46:06.760265 waagent[2042]: 2025-07-09T23:46:06.760229Z INFO Daemon Daemon Starting provisioning Jul 9 23:46:06.763730 waagent[2042]: 2025-07-09T23:46:06.763701Z INFO Daemon Daemon Handle ovf-env.xml. 
Jul 9 23:46:06.767057 waagent[2042]: 2025-07-09T23:46:06.767035Z INFO Daemon Daemon Set hostname [ci-4344.1.1-n-5de0cd73c3] Jul 9 23:46:06.786712 waagent[2042]: 2025-07-09T23:46:06.786674Z INFO Daemon Daemon Publish hostname [ci-4344.1.1-n-5de0cd73c3] Jul 9 23:46:06.790874 waagent[2042]: 2025-07-09T23:46:06.790843Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jul 9 23:46:06.795008 waagent[2042]: 2025-07-09T23:46:06.794982Z INFO Daemon Daemon Primary interface is [eth0] Jul 9 23:46:06.803359 systemd-networkd[1490]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 9 23:46:06.803554 systemd-networkd[1490]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 9 23:46:06.803590 systemd-networkd[1490]: eth0: DHCP lease lost Jul 9 23:46:06.804299 waagent[2042]: 2025-07-09T23:46:06.804201Z INFO Daemon Daemon Create user account if not exists Jul 9 23:46:06.807976 waagent[2042]: 2025-07-09T23:46:06.807947Z INFO Daemon Daemon User core already exists, skip useradd Jul 9 23:46:06.811904 waagent[2042]: 2025-07-09T23:46:06.811878Z INFO Daemon Daemon Configure sudoer Jul 9 23:46:06.818824 waagent[2042]: 2025-07-09T23:46:06.818784Z INFO Daemon Daemon Configure sshd Jul 9 23:46:06.825546 waagent[2042]: 2025-07-09T23:46:06.825510Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jul 9 23:46:06.834147 waagent[2042]: 2025-07-09T23:46:06.834123Z INFO Daemon Daemon Deploy ssh public key. 
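The "Configure sshd" step logs that it added a snippet disabling password-based authentication and enabling client probing to keep connections alive. Directives to that effect, typically placed in a drop-in under /etc/ssh/sshd_config.d/, would look like this — a sketch of the intent, not the agent's verbatim file:

```
PasswordAuthentication no
ChallengeResponseAuthentication no
ClientAliveInterval 180
```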
Jul 9 23:46:06.839246 systemd-networkd[1490]: eth0: DHCPv4 address 10.200.20.11/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jul 9 23:46:07.894262 waagent[2042]: 2025-07-09T23:46:07.890969Z INFO Daemon Daemon Provisioning complete Jul 9 23:46:07.903691 waagent[2042]: 2025-07-09T23:46:07.903662Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jul 9 23:46:07.907881 waagent[2042]: 2025-07-09T23:46:07.907854Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Jul 9 23:46:07.914591 waagent[2042]: 2025-07-09T23:46:07.914567Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Jul 9 23:46:08.009203 waagent[2153]: 2025-07-09T23:46:08.009115Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Jul 9 23:46:08.009799 waagent[2153]: 2025-07-09T23:46:08.009477Z INFO ExtHandler ExtHandler OS: flatcar 4344.1.1 Jul 9 23:46:08.009799 waagent[2153]: 2025-07-09T23:46:08.009542Z INFO ExtHandler ExtHandler Python: 3.11.12 Jul 9 23:46:08.009799 waagent[2153]: 2025-07-09T23:46:08.009580Z INFO ExtHandler ExtHandler CPU Arch: aarch64 Jul 9 23:46:08.028854 waagent[2153]: 2025-07-09T23:46:08.028820Z INFO ExtHandler ExtHandler Distro: flatcar-4344.1.1; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.12; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Jul 9 23:46:08.029040 waagent[2153]: 2025-07-09T23:46:08.029016Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 9 23:46:08.029185 waagent[2153]: 2025-07-09T23:46:08.029146Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 9 23:46:08.034852 waagent[2153]: 2025-07-09T23:46:08.034292Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jul 9 23:46:08.038889 waagent[2153]: 2025-07-09T23:46:08.038861Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175 Jul 9 23:46:08.039368 
waagent[2153]: 2025-07-09T23:46:08.039338Z INFO ExtHandler Jul 9 23:46:08.039505 waagent[2153]: 2025-07-09T23:46:08.039481Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 80aa6df2-b3ae-4fab-b085-ec21aed6f750 eTag: 15661765646254070058 source: Fabric] Jul 9 23:46:08.039804 waagent[2153]: 2025-07-09T23:46:08.039776Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Jul 9 23:46:08.040296 waagent[2153]: 2025-07-09T23:46:08.040265Z INFO ExtHandler Jul 9 23:46:08.040430 waagent[2153]: 2025-07-09T23:46:08.040406Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jul 9 23:46:08.043949 waagent[2153]: 2025-07-09T23:46:08.043923Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jul 9 23:46:08.101286 waagent[2153]: 2025-07-09T23:46:08.101241Z INFO ExtHandler Downloaded certificate {'thumbprint': '9128ABDD232547D3B52C8CAD7D4820E1307F1F0C', 'hasPrivateKey': True} Jul 9 23:46:08.101652 waagent[2153]: 2025-07-09T23:46:08.101623Z INFO ExtHandler Downloaded certificate {'thumbprint': '7242974C056C3DDD1A4CCB11F125ECCED99638EA', 'hasPrivateKey': False} Jul 9 23:46:08.102059 waagent[2153]: 2025-07-09T23:46:08.102028Z INFO ExtHandler Fetch goal state completed Jul 9 23:46:08.112569 waagent[2153]: 2025-07-09T23:46:08.112539Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.3.3 11 Feb 2025 (Library: OpenSSL 3.3.3 11 Feb 2025) Jul 9 23:46:08.115770 waagent[2153]: 2025-07-09T23:46:08.115729Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2153 Jul 9 23:46:08.115958 waagent[2153]: 2025-07-09T23:46:08.115930Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jul 9 23:46:08.116291 waagent[2153]: 2025-07-09T23:46:08.116261Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Jul 9 23:46:08.117417 waagent[2153]: 2025-07-09T23:46:08.117383Z INFO ExtHandler 
ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4344.1.1', '', 'Flatcar Container Linux by Kinvolk'] Jul 9 23:46:08.117805 waagent[2153]: 2025-07-09T23:46:08.117774Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4344.1.1', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Jul 9 23:46:08.117992 waagent[2153]: 2025-07-09T23:46:08.117964Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Jul 9 23:46:08.118543 waagent[2153]: 2025-07-09T23:46:08.118512Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jul 9 23:46:08.176458 waagent[2153]: 2025-07-09T23:46:08.176105Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jul 9 23:46:08.176458 waagent[2153]: 2025-07-09T23:46:08.176251Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jul 9 23:46:08.180317 waagent[2153]: 2025-07-09T23:46:08.180295Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jul 9 23:46:08.184362 systemd[1]: Reload requested from client PID 2170 ('systemctl') (unit waagent.service)... Jul 9 23:46:08.184374 systemd[1]: Reloading... Jul 9 23:46:08.246225 zram_generator::config[2208]: No configuration found. Jul 9 23:46:08.310194 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 9 23:46:08.394878 systemd[1]: Reloading finished in 210 ms. 
Jul 9 23:46:08.414497 waagent[2153]: 2025-07-09T23:46:08.413854Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jul 9 23:46:08.414497 waagent[2153]: 2025-07-09T23:46:08.413987Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jul 9 23:46:08.595214 waagent[2153]: 2025-07-09T23:46:08.595049Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jul 9 23:46:08.595426 waagent[2153]: 2025-07-09T23:46:08.595388Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Jul 9 23:46:08.596055 waagent[2153]: 2025-07-09T23:46:08.596011Z INFO ExtHandler ExtHandler Starting env monitor service. Jul 9 23:46:08.596369 waagent[2153]: 2025-07-09T23:46:08.596323Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jul 9 23:46:08.596638 waagent[2153]: 2025-07-09T23:46:08.596552Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 9 23:46:08.596804 waagent[2153]: 2025-07-09T23:46:08.596769Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jul 9 23:46:08.596898 waagent[2153]: 2025-07-09T23:46:08.596862Z INFO ExtHandler ExtHandler Start Extension Telemetry service. 
Jul 9 23:46:08.597097 waagent[2153]: 2025-07-09T23:46:08.597069Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 9 23:46:08.597147 waagent[2153]: 2025-07-09T23:46:08.597028Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 9 23:46:08.597484 waagent[2153]: 2025-07-09T23:46:08.597443Z INFO EnvHandler ExtHandler Configure routes Jul 9 23:46:08.597683 waagent[2153]: 2025-07-09T23:46:08.597622Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 9 23:46:08.597769 waagent[2153]: 2025-07-09T23:46:08.597702Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jul 9 23:46:08.597769 waagent[2153]: 2025-07-09T23:46:08.597758Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Jul 9 23:46:08.598435 waagent[2153]: 2025-07-09T23:46:08.598387Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Jul 9 23:46:08.598435 waagent[2153]: 2025-07-09T23:46:08.598237Z INFO EnvHandler ExtHandler Gateway:None Jul 9 23:46:08.598513 waagent[2153]: 2025-07-09T23:46:08.598479Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jul 9 23:46:08.598751 waagent[2153]: 2025-07-09T23:46:08.598717Z INFO EnvHandler ExtHandler Routes:None Jul 9 23:46:08.599633 waagent[2153]: 2025-07-09T23:46:08.599523Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jul 9 23:46:08.599633 waagent[2153]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jul 9 23:46:08.599633 waagent[2153]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Jul 9 23:46:08.599633 waagent[2153]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jul 9 23:46:08.599633 waagent[2153]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jul 9 23:46:08.599633 waagent[2153]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jul 9 23:46:08.599633 waagent[2153]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jul 9 23:46:08.603118 waagent[2153]: 2025-07-09T23:46:08.603074Z INFO ExtHandler ExtHandler Jul 9 23:46:08.603488 waagent[2153]: 2025-07-09T23:46:08.603438Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 83aaa9b7-d5d6-4a2e-bc4d-633b9d27cb5e correlation 1a3a9279-b62b-4339-aee8-3079efc38c92 created: 2025-07-09T23:45:02.352234Z] Jul 9 23:46:08.604355 waagent[2153]: 2025-07-09T23:46:08.604294Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
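The routing table MonitorHandler prints comes straight from /proc/net/route, where addresses and masks are 8 hex digits in little-endian byte order. A small helper decodes the columns (bash rather than POSIX sh, since it uses substring expansion; the sample values are taken from the table above):

```shell
#!/usr/bin/env bash
# /proc/net/route encodes IPv4 fields little-endian, so the gateway
# column "0114C80A" reads back byte-reversed as 10.200.20.1.
decode_route_hex() {
  local hex=$1
  # take the byte pairs in reverse order and print them as decimal octets
  printf '%d.%d.%d.%d' "0x${hex:6:2}" "0x${hex:4:2}" "0x${hex:2:2}" "0x${hex:0:2}"
}

decode_route_hex 0114C80A; echo "  # default gateway"
decode_route_hex 0014C80A; echo "  # eth0 subnet"
decode_route_hex 00FFFFFF; echo "  # /24 netmask"
```

The decoded gateway, 10.200.20.1, matches the DHCPv4 lease systemd-networkd reported earlier for eth0 (10.200.20.11/24, gateway 10.200.20.1).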
Jul 9 23:46:08.605441 waagent[2153]: 2025-07-09T23:46:08.605403Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 2 ms] Jul 9 23:46:08.655843 waagent[2153]: 2025-07-09T23:46:08.655786Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Jul 9 23:46:08.655843 waagent[2153]: Try `iptables -h' or 'iptables --help' for more information.) Jul 9 23:46:08.657389 waagent[2153]: 2025-07-09T23:46:08.657332Z INFO MonitorHandler ExtHandler Network interfaces: Jul 9 23:46:08.657389 waagent[2153]: Executing ['ip', '-a', '-o', 'link']: Jul 9 23:46:08.657389 waagent[2153]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jul 9 23:46:08.657389 waagent[2153]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:be:67:4e brd ff:ff:ff:ff:ff:ff Jul 9 23:46:08.657389 waagent[2153]: 3: enP47792s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:be:67:4e brd ff:ff:ff:ff:ff:ff\ altname enP47792p0s2 Jul 9 23:46:08.657389 waagent[2153]: Executing ['ip', '-4', '-a', '-o', 'address']: Jul 9 23:46:08.657389 waagent[2153]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jul 9 23:46:08.657389 waagent[2153]: 2: eth0 inet 10.200.20.11/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Jul 9 23:46:08.657389 waagent[2153]: Executing ['ip', '-6', '-a', '-o', 'address']: Jul 9 23:46:08.657389 waagent[2153]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jul 9 23:46:08.657389 waagent[2153]: 2: eth0 inet6 fe80::222:48ff:febe:674e/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jul 9 23:46:08.657389 waagent[2153]: 3: 
enP47792s1 inet6 fe80::222:48ff:febe:674e/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jul 9 23:46:08.657584 waagent[2153]: 2025-07-09T23:46:08.657511Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: C8965C1F-6165-4D4E-B726-D367AAA1C912;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Jul 9 23:46:08.709329 waagent[2153]: 2025-07-09T23:46:08.709278Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Jul 9 23:46:08.709329 waagent[2153]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jul 9 23:46:08.709329 waagent[2153]: pkts bytes target prot opt in out source destination Jul 9 23:46:08.709329 waagent[2153]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jul 9 23:46:08.709329 waagent[2153]: pkts bytes target prot opt in out source destination Jul 9 23:46:08.709329 waagent[2153]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jul 9 23:46:08.709329 waagent[2153]: pkts bytes target prot opt in out source destination Jul 9 23:46:08.709329 waagent[2153]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jul 9 23:46:08.709329 waagent[2153]: 1 52 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jul 9 23:46:08.709329 waagent[2153]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jul 9 23:46:08.711787 waagent[2153]: 2025-07-09T23:46:08.711739Z INFO EnvHandler ExtHandler Current Firewall rules: Jul 9 23:46:08.711787 waagent[2153]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jul 9 23:46:08.711787 waagent[2153]: pkts bytes target prot opt in out source destination Jul 9 23:46:08.711787 waagent[2153]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jul 9 23:46:08.711787 waagent[2153]: pkts bytes target prot opt in out source destination Jul 9 23:46:08.711787 waagent[2153]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jul 9 23:46:08.711787 waagent[2153]: pkts bytes target 
prot opt in out source destination Jul 9 23:46:08.711787 waagent[2153]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jul 9 23:46:08.711787 waagent[2153]: 1 52 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jul 9 23:46:08.711787 waagent[2153]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jul 9 23:46:08.711972 waagent[2153]: 2025-07-09T23:46:08.711945Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jul 9 23:46:15.472909 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 9 23:46:15.474160 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 23:46:15.627188 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 23:46:15.639469 (kubelet)[2303]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 9 23:46:15.705138 kubelet[2303]: E0709 23:46:15.705109 2303 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 9 23:46:15.707644 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 9 23:46:15.707746 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 9 23:46:15.708137 systemd[1]: kubelet.service: Consumed 101ms CPU time, 107.2M memory peak. Jul 9 23:46:19.090007 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 9 23:46:19.091447 systemd[1]: Started sshd@0-10.200.20.11:22-10.200.16.10:39114.service - OpenSSH per-connection server daemon (10.200.16.10:39114). 
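The kubelet failure here (identical to the one at first boot) is the normal pre-bootstrap state on a kubeadm node: kubelet.service is enabled, but /var/lib/kubelet/config.yaml is only written when `kubeadm init` or `kubeadm join` runs, so systemd keeps restarting the unit until then (the restart counter is already at 1). The file kubeadm eventually generates is a KubeletConfiguration; a minimal sketch of its shape, illustrative rather than this node's actual file:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd      # matches SystemdCgroup=true in containerd's runc options
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
staticPodPath: /etc/kubernetes/manifests
```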
Jul 9 23:46:19.648071 sshd[2310]: Accepted publickey for core from 10.200.16.10 port 39114 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko Jul 9 23:46:19.649050 sshd-session[2310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:46:19.652832 systemd-logind[1904]: New session 3 of user core. Jul 9 23:46:19.660308 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 9 23:46:20.057354 systemd[1]: Started sshd@1-10.200.20.11:22-10.200.16.10:49730.service - OpenSSH per-connection server daemon (10.200.16.10:49730). Jul 9 23:46:20.508356 sshd[2315]: Accepted publickey for core from 10.200.16.10 port 49730 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko Jul 9 23:46:20.509356 sshd-session[2315]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:46:20.512876 systemd-logind[1904]: New session 4 of user core. Jul 9 23:46:20.520452 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 9 23:46:20.850058 sshd[2317]: Connection closed by 10.200.16.10 port 49730 Jul 9 23:46:20.850633 sshd-session[2315]: pam_unix(sshd:session): session closed for user core Jul 9 23:46:20.853357 systemd[1]: sshd@1-10.200.20.11:22-10.200.16.10:49730.service: Deactivated successfully. Jul 9 23:46:20.854597 systemd[1]: session-4.scope: Deactivated successfully. Jul 9 23:46:20.855210 systemd-logind[1904]: Session 4 logged out. Waiting for processes to exit. Jul 9 23:46:20.856142 systemd-logind[1904]: Removed session 4. Jul 9 23:46:20.938263 systemd[1]: Started sshd@2-10.200.20.11:22-10.200.16.10:49742.service - OpenSSH per-connection server daemon (10.200.16.10:49742). 
Jul 9 23:46:21.416960 sshd[2323]: Accepted publickey for core from 10.200.16.10 port 49742 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko Jul 9 23:46:21.418195 sshd-session[2323]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:46:21.422161 systemd-logind[1904]: New session 5 of user core. Jul 9 23:46:21.428287 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 9 23:46:21.756340 sshd[2325]: Connection closed by 10.200.16.10 port 49742 Jul 9 23:46:21.756796 sshd-session[2323]: pam_unix(sshd:session): session closed for user core Jul 9 23:46:21.759720 systemd[1]: sshd@2-10.200.20.11:22-10.200.16.10:49742.service: Deactivated successfully. Jul 9 23:46:21.760935 systemd[1]: session-5.scope: Deactivated successfully. Jul 9 23:46:21.763320 systemd-logind[1904]: Session 5 logged out. Waiting for processes to exit. Jul 9 23:46:21.764246 systemd-logind[1904]: Removed session 5. Jul 9 23:46:21.840346 systemd[1]: Started sshd@3-10.200.20.11:22-10.200.16.10:49744.service - OpenSSH per-connection server daemon (10.200.16.10:49744). Jul 9 23:46:22.292454 sshd[2331]: Accepted publickey for core from 10.200.16.10 port 49744 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko Jul 9 23:46:22.293628 sshd-session[2331]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:46:22.297270 systemd-logind[1904]: New session 6 of user core. Jul 9 23:46:22.309306 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 9 23:46:22.635275 sshd[2333]: Connection closed by 10.200.16.10 port 49744 Jul 9 23:46:22.635609 sshd-session[2331]: pam_unix(sshd:session): session closed for user core Jul 9 23:46:22.637715 systemd[1]: sshd@3-10.200.20.11:22-10.200.16.10:49744.service: Deactivated successfully. Jul 9 23:46:22.638968 systemd[1]: session-6.scope: Deactivated successfully. Jul 9 23:46:22.641616 systemd-logind[1904]: Session 6 logged out. Waiting for processes to exit. 
Jul 9 23:46:22.642699 systemd-logind[1904]: Removed session 6. Jul 9 23:46:22.720301 systemd[1]: Started sshd@4-10.200.20.11:22-10.200.16.10:49746.service - OpenSSH per-connection server daemon (10.200.16.10:49746). Jul 9 23:46:23.171054 sshd[2339]: Accepted publickey for core from 10.200.16.10 port 49746 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko Jul 9 23:46:23.172079 sshd-session[2339]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:46:23.175761 systemd-logind[1904]: New session 7 of user core. Jul 9 23:46:23.186291 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 9 23:46:23.506648 sudo[2342]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 9 23:46:23.506856 sudo[2342]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 9 23:46:23.540738 sudo[2342]: pam_unix(sudo:session): session closed for user root Jul 9 23:46:23.629882 sshd[2341]: Connection closed by 10.200.16.10 port 49746 Jul 9 23:46:23.629216 sshd-session[2339]: pam_unix(sshd:session): session closed for user core Jul 9 23:46:23.631980 systemd[1]: sshd@4-10.200.20.11:22-10.200.16.10:49746.service: Deactivated successfully. Jul 9 23:46:23.633290 systemd[1]: session-7.scope: Deactivated successfully. Jul 9 23:46:23.633895 systemd-logind[1904]: Session 7 logged out. Waiting for processes to exit. Jul 9 23:46:23.635058 systemd-logind[1904]: Removed session 7. Jul 9 23:46:23.710325 systemd[1]: Started sshd@5-10.200.20.11:22-10.200.16.10:49760.service - OpenSSH per-connection server daemon (10.200.16.10:49760). Jul 9 23:46:24.166468 sshd[2348]: Accepted publickey for core from 10.200.16.10 port 49760 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko Jul 9 23:46:24.168531 sshd-session[2348]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:46:24.172356 systemd-logind[1904]: New session 8 of user core. 
Jul 9 23:46:24.178284 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 9 23:46:24.423908 sudo[2352]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 9 23:46:24.424607 sudo[2352]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 9 23:46:24.430993 sudo[2352]: pam_unix(sudo:session): session closed for user root Jul 9 23:46:24.434488 sudo[2351]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 9 23:46:24.434681 sudo[2351]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 9 23:46:24.441457 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 9 23:46:24.469000 augenrules[2374]: No rules Jul 9 23:46:24.469983 systemd[1]: audit-rules.service: Deactivated successfully. Jul 9 23:46:24.470131 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 9 23:46:24.472181 sudo[2351]: pam_unix(sudo:session): session closed for user root Jul 9 23:46:24.543497 sshd[2350]: Connection closed by 10.200.16.10 port 49760 Jul 9 23:46:24.543805 sshd-session[2348]: pam_unix(sshd:session): session closed for user core Jul 9 23:46:24.546663 systemd[1]: sshd@5-10.200.20.11:22-10.200.16.10:49760.service: Deactivated successfully. Jul 9 23:46:24.547883 systemd[1]: session-8.scope: Deactivated successfully. Jul 9 23:46:24.548453 systemd-logind[1904]: Session 8 logged out. Waiting for processes to exit. Jul 9 23:46:24.549734 systemd-logind[1904]: Removed session 8. Jul 9 23:46:24.637345 systemd[1]: Started sshd@6-10.200.20.11:22-10.200.16.10:49772.service - OpenSSH per-connection server daemon (10.200.16.10:49772). 
Jul 9 23:46:25.105902 sshd[2383]: Accepted publickey for core from 10.200.16.10 port 49772 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko Jul 9 23:46:25.106915 sshd-session[2383]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:46:25.110689 systemd-logind[1904]: New session 9 of user core. Jul 9 23:46:25.117286 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 9 23:46:25.373556 sudo[2386]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 9 23:46:25.374137 sudo[2386]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 9 23:46:25.722927 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 9 23:46:25.724429 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 23:46:26.078901 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 23:46:26.081439 (kubelet)[2407]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 9 23:46:26.108877 kubelet[2407]: E0709 23:46:26.108827 2407 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 9 23:46:26.110601 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 9 23:46:26.110711 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 9 23:46:26.111000 systemd[1]: kubelet.service: Consumed 100ms CPU time, 105.5M memory peak. Jul 9 23:46:26.730965 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jul 9 23:46:26.740417 (dockerd)[2419]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 9 23:46:27.492740 dockerd[2419]: time="2025-07-09T23:46:27.492496140Z" level=info msg="Starting up" Jul 9 23:46:27.494096 dockerd[2419]: time="2025-07-09T23:46:27.494069548Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 9 23:46:27.617651 systemd[1]: var-lib-docker-metacopy\x2dcheck244409164-merged.mount: Deactivated successfully. Jul 9 23:46:27.644200 dockerd[2419]: time="2025-07-09T23:46:27.644163612Z" level=info msg="Loading containers: start." Jul 9 23:46:27.671201 kernel: Initializing XFRM netlink socket Jul 9 23:46:27.964181 systemd-networkd[1490]: docker0: Link UP Jul 9 23:46:27.988775 dockerd[2419]: time="2025-07-09T23:46:27.988708780Z" level=info msg="Loading containers: done." Jul 9 23:46:28.012240 dockerd[2419]: time="2025-07-09T23:46:28.012191172Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 9 23:46:28.012360 dockerd[2419]: time="2025-07-09T23:46:28.012248084Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Jul 9 23:46:28.012360 dockerd[2419]: time="2025-07-09T23:46:28.012330796Z" level=info msg="Initializing buildkit" Jul 9 23:46:28.068887 dockerd[2419]: time="2025-07-09T23:46:28.068854644Z" level=info msg="Completed buildkit initialization" Jul 9 23:46:28.074019 dockerd[2419]: time="2025-07-09T23:46:28.073982908Z" level=info msg="Daemon has completed initialization" Jul 9 23:46:28.074244 dockerd[2419]: time="2025-07-09T23:46:28.074109508Z" level=info msg="API listen on /run/docker.sock" Jul 9 23:46:28.075297 systemd[1]: Started docker.service - Docker Application 
Container Engine. Jul 9 23:46:28.122447 chronyd[1886]: Selected source PHC0 Jul 9 23:46:28.685612 containerd[1923]: time="2025-07-09T23:46:28.685294584Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\"" Jul 9 23:46:29.729922 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount283993161.mount: Deactivated successfully. Jul 9 23:46:30.908524 containerd[1923]: time="2025-07-09T23:46:30.908474739Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:46:30.918917 containerd[1923]: time="2025-07-09T23:46:30.918890067Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=25651793" Jul 9 23:46:30.923923 containerd[1923]: time="2025-07-09T23:46:30.923897483Z" level=info msg="ImageCreate event name:\"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:46:30.929113 containerd[1923]: time="2025-07-09T23:46:30.929087595Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:46:30.929719 containerd[1923]: time="2025-07-09T23:46:30.929442771Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"25648593\" in 2.244113465s" Jul 9 23:46:30.929719 containerd[1923]: time="2025-07-09T23:46:30.929468011Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference 
\"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\"" Jul 9 23:46:30.930715 containerd[1923]: time="2025-07-09T23:46:30.930694811Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\"" Jul 9 23:46:32.219289 containerd[1923]: time="2025-07-09T23:46:32.219236891Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:46:32.225858 containerd[1923]: time="2025-07-09T23:46:32.225825203Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=22459677" Jul 9 23:46:32.235012 containerd[1923]: time="2025-07-09T23:46:32.234979675Z" level=info msg="ImageCreate event name:\"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:46:32.244519 containerd[1923]: time="2025-07-09T23:46:32.244469851Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:46:32.245199 containerd[1923]: time="2025-07-09T23:46:32.244915779Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"23995467\" in 1.314196032s" Jul 9 23:46:32.245199 containerd[1923]: time="2025-07-09T23:46:32.244939339Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\"" Jul 9 23:46:32.245623 containerd[1923]: 
time="2025-07-09T23:46:32.245602299Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\"" Jul 9 23:46:33.435054 containerd[1923]: time="2025-07-09T23:46:33.435004475Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:46:33.439475 containerd[1923]: time="2025-07-09T23:46:33.439314475Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=17125066" Jul 9 23:46:33.445259 containerd[1923]: time="2025-07-09T23:46:33.445239819Z" level=info msg="ImageCreate event name:\"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:46:33.452650 containerd[1923]: time="2025-07-09T23:46:33.452624579Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:46:33.453256 containerd[1923]: time="2025-07-09T23:46:33.453115963Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"18660874\" in 1.20748852s" Jul 9 23:46:33.453256 containerd[1923]: time="2025-07-09T23:46:33.453143139Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\"" Jul 9 23:46:33.453733 containerd[1923]: time="2025-07-09T23:46:33.453645163Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\"" Jul 9 23:46:34.642365 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2812636609.mount: Deactivated successfully. Jul 9 23:46:34.967653 containerd[1923]: time="2025-07-09T23:46:34.967527331Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:46:34.972936 containerd[1923]: time="2025-07-09T23:46:34.972906491Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=26915957" Jul 9 23:46:34.986827 containerd[1923]: time="2025-07-09T23:46:34.986756403Z" level=info msg="ImageCreate event name:\"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:46:34.993560 containerd[1923]: time="2025-07-09T23:46:34.993529747Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:46:34.994085 containerd[1923]: time="2025-07-09T23:46:34.993773667Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"26914976\" in 1.540104496s" Jul 9 23:46:34.994085 containerd[1923]: time="2025-07-09T23:46:34.993796435Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\"" Jul 9 23:46:34.994304 containerd[1923]: time="2025-07-09T23:46:34.994278467Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 9 23:46:35.765003 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount14934964.mount: Deactivated successfully. 
Jul 9 23:46:36.223230 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 9 23:46:36.224928 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 23:46:36.338685 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 23:46:36.340954 (kubelet)[2715]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 9 23:46:36.449455 kubelet[2715]: E0709 23:46:36.449392 2715 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 9 23:46:36.451544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 9 23:46:36.451772 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 9 23:46:36.452301 systemd[1]: kubelet.service: Consumed 104ms CPU time, 105M memory peak. 
Jul 9 23:46:38.409594 containerd[1923]: time="2025-07-09T23:46:38.409540811Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:46:38.415469 containerd[1923]: time="2025-07-09T23:46:38.415435605Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Jul 9 23:46:38.421247 containerd[1923]: time="2025-07-09T23:46:38.421217732Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:46:38.428620 containerd[1923]: time="2025-07-09T23:46:38.428563959Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:46:38.429189 containerd[1923]: time="2025-07-09T23:46:38.429075031Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 3.434770468s" Jul 9 23:46:38.429189 containerd[1923]: time="2025-07-09T23:46:38.429102048Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jul 9 23:46:38.429502 containerd[1923]: time="2025-07-09T23:46:38.429480821Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 9 23:46:39.087107 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2304506181.mount: Deactivated successfully. 
Jul 9 23:46:39.122958 containerd[1923]: time="2025-07-09T23:46:39.122519839Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 9 23:46:39.126692 containerd[1923]: time="2025-07-09T23:46:39.126668032Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Jul 9 23:46:39.133188 containerd[1923]: time="2025-07-09T23:46:39.133149958Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 9 23:46:39.147760 containerd[1923]: time="2025-07-09T23:46:39.147732711Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 9 23:46:39.148211 containerd[1923]: time="2025-07-09T23:46:39.148094467Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 718.590214ms" Jul 9 23:46:39.148657 containerd[1923]: time="2025-07-09T23:46:39.148567538Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 9 23:46:39.149020 containerd[1923]: time="2025-07-09T23:46:39.148981232Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jul 9 23:46:40.009013 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3843006765.mount: Deactivated 
successfully. Jul 9 23:46:42.083945 containerd[1923]: time="2025-07-09T23:46:42.083850256Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:46:42.088114 containerd[1923]: time="2025-07-09T23:46:42.088083140Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406465" Jul 9 23:46:42.093303 containerd[1923]: time="2025-07-09T23:46:42.093268903Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:46:42.098277 containerd[1923]: time="2025-07-09T23:46:42.098225594Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:46:42.100287 containerd[1923]: time="2025-07-09T23:46:42.099362680Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.950354006s" Jul 9 23:46:42.100287 containerd[1923]: time="2025-07-09T23:46:42.099399457Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Jul 9 23:46:45.427558 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 23:46:45.427669 systemd[1]: kubelet.service: Consumed 104ms CPU time, 105M memory peak. Jul 9 23:46:45.429229 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 23:46:45.448307 systemd[1]: Reload requested from client PID 2843 ('systemctl') (unit session-9.scope)... 
Jul 9 23:46:45.448318 systemd[1]: Reloading... Jul 9 23:46:45.518282 zram_generator::config[2889]: No configuration found. Jul 9 23:46:45.588926 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 9 23:46:45.678931 systemd[1]: Reloading finished in 230 ms. Jul 9 23:46:45.738796 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 9 23:46:45.739003 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 9 23:46:45.739342 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 23:46:45.739461 systemd[1]: kubelet.service: Consumed 61ms CPU time, 89.4M memory peak. Jul 9 23:46:45.741281 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 23:46:45.920110 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 23:46:45.923026 (kubelet)[2953]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 9 23:46:45.946826 kubelet[2953]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 9 23:46:45.948205 kubelet[2953]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 9 23:46:45.948205 kubelet[2953]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 9 23:46:45.948205 kubelet[2953]: I0709 23:46:45.947128 2953 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 9 23:46:46.153532 kubelet[2953]: I0709 23:46:46.153482 2953 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 9 23:46:46.153532 kubelet[2953]: I0709 23:46:46.153507 2953 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 9 23:46:46.153791 kubelet[2953]: I0709 23:46:46.153775 2953 server.go:934] "Client rotation is on, will bootstrap in background" Jul 9 23:46:46.167470 kubelet[2953]: E0709 23:46:46.167444 2953 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.11:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" Jul 9 23:46:46.169124 kubelet[2953]: I0709 23:46:46.169109 2953 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 9 23:46:46.173873 kubelet[2953]: I0709 23:46:46.173861 2953 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 9 23:46:46.176792 kubelet[2953]: I0709 23:46:46.176772 2953 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 9 23:46:46.177265 kubelet[2953]: I0709 23:46:46.177249 2953 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 9 23:46:46.177460 kubelet[2953]: I0709 23:46:46.177435 2953 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 9 23:46:46.177639 kubelet[2953]: I0709 23:46:46.177521 2953 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4344.1.1-n-5de0cd73c3","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyMa
nagerPolicyOptions":null,"CgroupVersion":2} Jul 9 23:46:46.177758 kubelet[2953]: I0709 23:46:46.177748 2953 topology_manager.go:138] "Creating topology manager with none policy" Jul 9 23:46:46.177806 kubelet[2953]: I0709 23:46:46.177799 2953 container_manager_linux.go:300] "Creating device plugin manager" Jul 9 23:46:46.177937 kubelet[2953]: I0709 23:46:46.177926 2953 state_mem.go:36] "Initialized new in-memory state store" Jul 9 23:46:46.179068 kubelet[2953]: I0709 23:46:46.179053 2953 kubelet.go:408] "Attempting to sync node with API server" Jul 9 23:46:46.179157 kubelet[2953]: I0709 23:46:46.179146 2953 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 9 23:46:46.179226 kubelet[2953]: I0709 23:46:46.179217 2953 kubelet.go:314] "Adding apiserver pod source" Jul 9 23:46:46.179295 kubelet[2953]: I0709 23:46:46.179285 2953 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 9 23:46:46.179989 kubelet[2953]: W0709 23:46:46.179955 2953 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344.1.1-n-5de0cd73c3&limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused Jul 9 23:46:46.180037 kubelet[2953]: E0709 23:46:46.179996 2953 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344.1.1-n-5de0cd73c3&limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" Jul 9 23:46:46.181486 kubelet[2953]: W0709 23:46:46.181448 2953 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection 
refused Jul 9 23:46:46.181591 kubelet[2953]: E0709 23:46:46.181579 2953 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" Jul 9 23:46:46.181722 kubelet[2953]: I0709 23:46:46.181710 2953 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 9 23:46:46.182055 kubelet[2953]: I0709 23:46:46.182040 2953 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 9 23:46:46.182145 kubelet[2953]: W0709 23:46:46.182137 2953 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 9 23:46:46.182613 kubelet[2953]: I0709 23:46:46.182593 2953 server.go:1274] "Started kubelet" Jul 9 23:46:46.186840 kubelet[2953]: I0709 23:46:46.186821 2953 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 9 23:46:46.187963 kubelet[2953]: I0709 23:46:46.187929 2953 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 9 23:46:46.188845 kubelet[2953]: I0709 23:46:46.188664 2953 server.go:449] "Adding debug handlers to kubelet server" Jul 9 23:46:46.189865 kubelet[2953]: E0709 23:46:46.188737 2953 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.11:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.11:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4344.1.1-n-5de0cd73c3.1850ba09c5c467ba default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4344.1.1-n-5de0cd73c3,UID:ci-4344.1.1-n-5de0cd73c3,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:ci-4344.1.1-n-5de0cd73c3,},FirstTimestamp:2025-07-09 23:46:46.182578106 +0000 UTC m=+0.257237330,LastTimestamp:2025-07-09 23:46:46.182578106 +0000 UTC m=+0.257237330,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4344.1.1-n-5de0cd73c3,}" Jul 9 23:46:46.191504 kubelet[2953]: I0709 23:46:46.189891 2953 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 9 23:46:46.191578 kubelet[2953]: I0709 23:46:46.191561 2953 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 9 23:46:46.191603 kubelet[2953]: I0709 23:46:46.190852 2953 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 9 23:46:46.191649 kubelet[2953]: I0709 23:46:46.190109 2953 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 9 23:46:46.191740 kubelet[2953]: I0709 23:46:46.190866 2953 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 9 23:46:46.192327 kubelet[2953]: I0709 23:46:46.192056 2953 reconciler.go:26] "Reconciler: start to sync state" Jul 9 23:46:46.192327 kubelet[2953]: E0709 23:46:46.190944 2953 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4344.1.1-n-5de0cd73c3\" not found" Jul 9 23:46:46.192458 kubelet[2953]: E0709 23:46:46.192427 2953 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.1.1-n-5de0cd73c3?timeout=10s\": dial tcp 10.200.20.11:6443: connect: connection refused" interval="200ms" Jul 9 23:46:46.192458 kubelet[2953]: W0709 23:46:46.192426 2953 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list 
*v1.CSIDriver: Get "https://10.200.20.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused Jul 9 23:46:46.192514 kubelet[2953]: E0709 23:46:46.192463 2953 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" Jul 9 23:46:46.192825 kubelet[2953]: I0709 23:46:46.192808 2953 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 9 23:46:46.193599 kubelet[2953]: E0709 23:46:46.193580 2953 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 9 23:46:46.195227 kubelet[2953]: I0709 23:46:46.194207 2953 factory.go:221] Registration of the containerd container factory successfully Jul 9 23:46:46.195227 kubelet[2953]: I0709 23:46:46.194220 2953 factory.go:221] Registration of the systemd container factory successfully Jul 9 23:46:46.201044 kubelet[2953]: I0709 23:46:46.200979 2953 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 9 23:46:46.201044 kubelet[2953]: I0709 23:46:46.200990 2953 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 9 23:46:46.201044 kubelet[2953]: I0709 23:46:46.201020 2953 state_mem.go:36] "Initialized new in-memory state store" Jul 9 23:46:46.206799 kubelet[2953]: I0709 23:46:46.206769 2953 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 9 23:46:46.208027 kubelet[2953]: I0709 23:46:46.208008 2953 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 9 23:46:46.208122 kubelet[2953]: I0709 23:46:46.208110 2953 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 9 23:46:46.208189 kubelet[2953]: I0709 23:46:46.208168 2953 kubelet.go:2321] "Starting kubelet main sync loop" Jul 9 23:46:46.208276 kubelet[2953]: E0709 23:46:46.208261 2953 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 9 23:46:46.208828 kubelet[2953]: W0709 23:46:46.208792 2953 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused Jul 9 23:46:46.208828 kubelet[2953]: E0709 23:46:46.208826 2953 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" Jul 9 23:46:46.292645 kubelet[2953]: E0709 23:46:46.292613 2953 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4344.1.1-n-5de0cd73c3\" not found" Jul 9 23:46:46.308859 kubelet[2953]: E0709 23:46:46.308818 2953 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 9 23:46:46.393165 kubelet[2953]: E0709 23:46:46.393134 2953 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4344.1.1-n-5de0cd73c3\" not found" Jul 9 23:46:46.393510 kubelet[2953]: E0709 23:46:46.393472 2953 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.200.20.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.1.1-n-5de0cd73c3?timeout=10s\": dial tcp 10.200.20.11:6443: connect: connection refused" interval="400ms" Jul 9 23:46:46.442919 kubelet[2953]: I0709 23:46:46.442893 2953 policy_none.go:49] "None policy: Start" Jul 9 23:46:46.443604 kubelet[2953]: I0709 23:46:46.443581 2953 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 9 23:46:46.443669 kubelet[2953]: I0709 23:46:46.443616 2953 state_mem.go:35] "Initializing new in-memory state store" Jul 9 23:46:46.454214 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 9 23:46:46.465197 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 9 23:46:46.467732 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 9 23:46:46.478726 kubelet[2953]: I0709 23:46:46.478707 2953 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 9 23:46:46.478956 kubelet[2953]: I0709 23:46:46.478943 2953 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 9 23:46:46.479277 kubelet[2953]: I0709 23:46:46.479160 2953 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 9 23:46:46.479666 kubelet[2953]: I0709 23:46:46.479481 2953 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 9 23:46:46.481411 kubelet[2953]: E0709 23:46:46.481395 2953 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4344.1.1-n-5de0cd73c3\" not found" Jul 9 23:46:46.517424 systemd[1]: Created slice kubepods-burstable-pod566cc66ad54bf39420d5a2b7fd8b1547.slice - libcontainer container kubepods-burstable-pod566cc66ad54bf39420d5a2b7fd8b1547.slice. 
Jul 9 23:46:46.526942 systemd[1]: Created slice kubepods-burstable-pod99e41063747827cb9803b4a8bf1cd580.slice - libcontainer container kubepods-burstable-pod99e41063747827cb9803b4a8bf1cd580.slice. Jul 9 23:46:46.532886 systemd[1]: Created slice kubepods-burstable-podbd38a039585d61c916c9a1e2549654b4.slice - libcontainer container kubepods-burstable-podbd38a039585d61c916c9a1e2549654b4.slice. Jul 9 23:46:46.581650 kubelet[2953]: I0709 23:46:46.581325 2953 kubelet_node_status.go:72] "Attempting to register node" node="ci-4344.1.1-n-5de0cd73c3" Jul 9 23:46:46.581650 kubelet[2953]: E0709 23:46:46.581591 2953 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.11:6443/api/v1/nodes\": dial tcp 10.200.20.11:6443: connect: connection refused" node="ci-4344.1.1-n-5de0cd73c3" Jul 9 23:46:46.593971 kubelet[2953]: I0709 23:46:46.593951 2953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/566cc66ad54bf39420d5a2b7fd8b1547-ca-certs\") pod \"kube-apiserver-ci-4344.1.1-n-5de0cd73c3\" (UID: \"566cc66ad54bf39420d5a2b7fd8b1547\") " pod="kube-system/kube-apiserver-ci-4344.1.1-n-5de0cd73c3" Jul 9 23:46:46.594007 kubelet[2953]: I0709 23:46:46.593974 2953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/566cc66ad54bf39420d5a2b7fd8b1547-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4344.1.1-n-5de0cd73c3\" (UID: \"566cc66ad54bf39420d5a2b7fd8b1547\") " pod="kube-system/kube-apiserver-ci-4344.1.1-n-5de0cd73c3" Jul 9 23:46:46.594007 kubelet[2953]: I0709 23:46:46.593993 2953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/99e41063747827cb9803b4a8bf1cd580-k8s-certs\") pod \"kube-controller-manager-ci-4344.1.1-n-5de0cd73c3\" (UID: 
\"99e41063747827cb9803b4a8bf1cd580\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-5de0cd73c3" Jul 9 23:46:46.594007 kubelet[2953]: I0709 23:46:46.594003 2953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bd38a039585d61c916c9a1e2549654b4-kubeconfig\") pod \"kube-scheduler-ci-4344.1.1-n-5de0cd73c3\" (UID: \"bd38a039585d61c916c9a1e2549654b4\") " pod="kube-system/kube-scheduler-ci-4344.1.1-n-5de0cd73c3" Jul 9 23:46:46.594052 kubelet[2953]: I0709 23:46:46.594014 2953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/566cc66ad54bf39420d5a2b7fd8b1547-k8s-certs\") pod \"kube-apiserver-ci-4344.1.1-n-5de0cd73c3\" (UID: \"566cc66ad54bf39420d5a2b7fd8b1547\") " pod="kube-system/kube-apiserver-ci-4344.1.1-n-5de0cd73c3" Jul 9 23:46:46.594052 kubelet[2953]: I0709 23:46:46.594023 2953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/99e41063747827cb9803b4a8bf1cd580-ca-certs\") pod \"kube-controller-manager-ci-4344.1.1-n-5de0cd73c3\" (UID: \"99e41063747827cb9803b4a8bf1cd580\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-5de0cd73c3" Jul 9 23:46:46.594052 kubelet[2953]: I0709 23:46:46.594032 2953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/99e41063747827cb9803b4a8bf1cd580-flexvolume-dir\") pod \"kube-controller-manager-ci-4344.1.1-n-5de0cd73c3\" (UID: \"99e41063747827cb9803b4a8bf1cd580\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-5de0cd73c3" Jul 9 23:46:46.594052 kubelet[2953]: I0709 23:46:46.594040 2953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/99e41063747827cb9803b4a8bf1cd580-kubeconfig\") pod \"kube-controller-manager-ci-4344.1.1-n-5de0cd73c3\" (UID: \"99e41063747827cb9803b4a8bf1cd580\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-5de0cd73c3" Jul 9 23:46:46.594111 kubelet[2953]: I0709 23:46:46.594051 2953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/99e41063747827cb9803b4a8bf1cd580-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4344.1.1-n-5de0cd73c3\" (UID: \"99e41063747827cb9803b4a8bf1cd580\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-5de0cd73c3" Jul 9 23:46:46.783095 kubelet[2953]: I0709 23:46:46.783068 2953 kubelet_node_status.go:72] "Attempting to register node" node="ci-4344.1.1-n-5de0cd73c3" Jul 9 23:46:46.783417 kubelet[2953]: E0709 23:46:46.783383 2953 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.11:6443/api/v1/nodes\": dial tcp 10.200.20.11:6443: connect: connection refused" node="ci-4344.1.1-n-5de0cd73c3" Jul 9 23:46:46.793733 kubelet[2953]: E0709 23:46:46.793707 2953 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.1.1-n-5de0cd73c3?timeout=10s\": dial tcp 10.200.20.11:6443: connect: connection refused" interval="800ms" Jul 9 23:46:46.826771 containerd[1923]: time="2025-07-09T23:46:46.826725197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4344.1.1-n-5de0cd73c3,Uid:566cc66ad54bf39420d5a2b7fd8b1547,Namespace:kube-system,Attempt:0,}" Jul 9 23:46:46.832290 containerd[1923]: time="2025-07-09T23:46:46.832245139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4344.1.1-n-5de0cd73c3,Uid:99e41063747827cb9803b4a8bf1cd580,Namespace:kube-system,Attempt:0,}" Jul 9 
23:46:46.834847 containerd[1923]: time="2025-07-09T23:46:46.834824515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4344.1.1-n-5de0cd73c3,Uid:bd38a039585d61c916c9a1e2549654b4,Namespace:kube-system,Attempt:0,}" Jul 9 23:46:47.037112 kubelet[2953]: W0709 23:46:47.036995 2953 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused Jul 9 23:46:47.037112 kubelet[2953]: E0709 23:46:47.037042 2953 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" Jul 9 23:46:47.044545 kubelet[2953]: W0709 23:46:47.044500 2953 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused Jul 9 23:46:47.044614 kubelet[2953]: E0709 23:46:47.044552 2953 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" Jul 9 23:46:47.186095 kubelet[2953]: I0709 23:46:47.185852 2953 kubelet_node_status.go:72] "Attempting to register node" node="ci-4344.1.1-n-5de0cd73c3" Jul 9 23:46:47.186095 kubelet[2953]: E0709 23:46:47.186083 2953 kubelet_node_status.go:95] "Unable to register node with API server" err="Post 
\"https://10.200.20.11:6443/api/v1/nodes\": dial tcp 10.200.20.11:6443: connect: connection refused" node="ci-4344.1.1-n-5de0cd73c3" Jul 9 23:46:47.330516 kubelet[2953]: W0709 23:46:47.330374 2953 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused Jul 9 23:46:47.330516 kubelet[2953]: E0709 23:46:47.330420 2953 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" Jul 9 23:46:47.594394 kubelet[2953]: E0709 23:46:47.594279 2953 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.1.1-n-5de0cd73c3?timeout=10s\": dial tcp 10.200.20.11:6443: connect: connection refused" interval="1.6s" Jul 9 23:46:47.606844 kubelet[2953]: W0709 23:46:47.606800 2953 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344.1.1-n-5de0cd73c3&limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused Jul 9 23:46:47.606924 kubelet[2953]: E0709 23:46:47.606851 2953 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344.1.1-n-5de0cd73c3&limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" Jul 9 23:46:47.921562 containerd[1923]: time="2025-07-09T23:46:47.921362329Z" level=info 
msg="connecting to shim 1f382ba60c75e3d1b77adcaca757887c95890c501118e6921f96b607759d1adb" address="unix:///run/containerd/s/6946325ef8e4c3036c56951b738b65d95d7edf7c26b79710e05725da1806279e" namespace=k8s.io protocol=ttrpc version=3 Jul 9 23:46:47.947050 containerd[1923]: time="2025-07-09T23:46:47.946697513Z" level=info msg="connecting to shim fc21aaaff268d98cd118c63c0c62f2f0106bc5b89f18975379cb318fe2db8d9d" address="unix:///run/containerd/s/b0ff0cec802b9e067b93887be76a7a0c6f0584c79c18866ad49b7f7a69104e32" namespace=k8s.io protocol=ttrpc version=3 Jul 9 23:46:47.947311 systemd[1]: Started cri-containerd-1f382ba60c75e3d1b77adcaca757887c95890c501118e6921f96b607759d1adb.scope - libcontainer container 1f382ba60c75e3d1b77adcaca757887c95890c501118e6921f96b607759d1adb. Jul 9 23:46:47.951406 containerd[1923]: time="2025-07-09T23:46:47.951383072Z" level=info msg="connecting to shim b1f82f0e0ea4b75f1dfa284c0186c92119acdaa35a66a82525d33e5583ae0fbb" address="unix:///run/containerd/s/117a65da303216995dd9b0abacc45a303687cae3fdd5b799163b1d5cc891c802" namespace=k8s.io protocol=ttrpc version=3 Jul 9 23:46:47.977378 systemd[1]: Started cri-containerd-b1f82f0e0ea4b75f1dfa284c0186c92119acdaa35a66a82525d33e5583ae0fbb.scope - libcontainer container b1f82f0e0ea4b75f1dfa284c0186c92119acdaa35a66a82525d33e5583ae0fbb. Jul 9 23:46:47.980873 systemd[1]: Started cri-containerd-fc21aaaff268d98cd118c63c0c62f2f0106bc5b89f18975379cb318fe2db8d9d.scope - libcontainer container fc21aaaff268d98cd118c63c0c62f2f0106bc5b89f18975379cb318fe2db8d9d. 
Jul 9 23:46:47.988407 kubelet[2953]: I0709 23:46:47.988339 2953 kubelet_node_status.go:72] "Attempting to register node" node="ci-4344.1.1-n-5de0cd73c3" Jul 9 23:46:47.989169 kubelet[2953]: E0709 23:46:47.989138 2953 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.11:6443/api/v1/nodes\": dial tcp 10.200.20.11:6443: connect: connection refused" node="ci-4344.1.1-n-5de0cd73c3" Jul 9 23:46:47.999320 containerd[1923]: time="2025-07-09T23:46:47.999293305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4344.1.1-n-5de0cd73c3,Uid:99e41063747827cb9803b4a8bf1cd580,Namespace:kube-system,Attempt:0,} returns sandbox id \"1f382ba60c75e3d1b77adcaca757887c95890c501118e6921f96b607759d1adb\"" Jul 9 23:46:48.005064 containerd[1923]: time="2025-07-09T23:46:48.003780040Z" level=info msg="CreateContainer within sandbox \"1f382ba60c75e3d1b77adcaca757887c95890c501118e6921f96b607759d1adb\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 9 23:46:48.039625 containerd[1923]: time="2025-07-09T23:46:48.039601135Z" level=info msg="Container e2b0c6aacae538ce4d518653c6ca2c06a81dd18aa21ccaabaac7d54c7a0d9fa7: CDI devices from CRI Config.CDIDevices: []" Jul 9 23:46:48.048565 containerd[1923]: time="2025-07-09T23:46:48.048542084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4344.1.1-n-5de0cd73c3,Uid:bd38a039585d61c916c9a1e2549654b4,Namespace:kube-system,Attempt:0,} returns sandbox id \"fc21aaaff268d98cd118c63c0c62f2f0106bc5b89f18975379cb318fe2db8d9d\"" Jul 9 23:46:48.050304 containerd[1923]: time="2025-07-09T23:46:48.050273885Z" level=info msg="CreateContainer within sandbox \"fc21aaaff268d98cd118c63c0c62f2f0106bc5b89f18975379cb318fe2db8d9d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 9 23:46:48.055657 containerd[1923]: time="2025-07-09T23:46:48.055634181Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4344.1.1-n-5de0cd73c3,Uid:566cc66ad54bf39420d5a2b7fd8b1547,Namespace:kube-system,Attempt:0,} returns sandbox id \"b1f82f0e0ea4b75f1dfa284c0186c92119acdaa35a66a82525d33e5583ae0fbb\"" Jul 9 23:46:48.057607 containerd[1923]: time="2025-07-09T23:46:48.057582341Z" level=info msg="CreateContainer within sandbox \"b1f82f0e0ea4b75f1dfa284c0186c92119acdaa35a66a82525d33e5583ae0fbb\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 9 23:46:48.065187 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Jul 9 23:46:48.097342 containerd[1923]: time="2025-07-09T23:46:48.097312829Z" level=info msg="CreateContainer within sandbox \"1f382ba60c75e3d1b77adcaca757887c95890c501118e6921f96b607759d1adb\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e2b0c6aacae538ce4d518653c6ca2c06a81dd18aa21ccaabaac7d54c7a0d9fa7\"" Jul 9 23:46:48.103803 containerd[1923]: time="2025-07-09T23:46:48.103781175Z" level=info msg="StartContainer for \"e2b0c6aacae538ce4d518653c6ca2c06a81dd18aa21ccaabaac7d54c7a0d9fa7\"" Jul 9 23:46:48.104656 containerd[1923]: time="2025-07-09T23:46:48.104602349Z" level=info msg="connecting to shim e2b0c6aacae538ce4d518653c6ca2c06a81dd18aa21ccaabaac7d54c7a0d9fa7" address="unix:///run/containerd/s/6946325ef8e4c3036c56951b738b65d95d7edf7c26b79710e05725da1806279e" protocol=ttrpc version=3 Jul 9 23:46:48.115905 containerd[1923]: time="2025-07-09T23:46:48.115874457Z" level=info msg="Container 3a982ed89d071b6c2c2804bbc03ef4636faf5d78d00bd336f1382dd857f57d33: CDI devices from CRI Config.CDIDevices: []" Jul 9 23:46:48.117295 systemd[1]: Started cri-containerd-e2b0c6aacae538ce4d518653c6ca2c06a81dd18aa21ccaabaac7d54c7a0d9fa7.scope - libcontainer container e2b0c6aacae538ce4d518653c6ca2c06a81dd18aa21ccaabaac7d54c7a0d9fa7. 
Jul 9 23:46:48.126550 containerd[1923]: time="2025-07-09T23:46:48.126486653Z" level=info msg="Container 224777be5b2612bfd03d7b4bf1ac34f49fe4ca3420e829f15152ca467a38b4cc: CDI devices from CRI Config.CDIDevices: []" Jul 9 23:46:48.152901 containerd[1923]: time="2025-07-09T23:46:48.152644867Z" level=info msg="CreateContainer within sandbox \"fc21aaaff268d98cd118c63c0c62f2f0106bc5b89f18975379cb318fe2db8d9d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3a982ed89d071b6c2c2804bbc03ef4636faf5d78d00bd336f1382dd857f57d33\"" Jul 9 23:46:48.153194 containerd[1923]: time="2025-07-09T23:46:48.153157806Z" level=info msg="StartContainer for \"3a982ed89d071b6c2c2804bbc03ef4636faf5d78d00bd336f1382dd857f57d33\"" Jul 9 23:46:48.153757 containerd[1923]: time="2025-07-09T23:46:48.153438785Z" level=info msg="StartContainer for \"e2b0c6aacae538ce4d518653c6ca2c06a81dd18aa21ccaabaac7d54c7a0d9fa7\" returns successfully" Jul 9 23:46:48.154676 containerd[1923]: time="2025-07-09T23:46:48.154608644Z" level=info msg="connecting to shim 3a982ed89d071b6c2c2804bbc03ef4636faf5d78d00bd336f1382dd857f57d33" address="unix:///run/containerd/s/b0ff0cec802b9e067b93887be76a7a0c6f0584c79c18866ad49b7f7a69104e32" protocol=ttrpc version=3 Jul 9 23:46:48.167246 containerd[1923]: time="2025-07-09T23:46:48.167206274Z" level=info msg="CreateContainer within sandbox \"b1f82f0e0ea4b75f1dfa284c0186c92119acdaa35a66a82525d33e5583ae0fbb\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"224777be5b2612bfd03d7b4bf1ac34f49fe4ca3420e829f15152ca467a38b4cc\"" Jul 9 23:46:48.169809 containerd[1923]: time="2025-07-09T23:46:48.169781514Z" level=info msg="StartContainer for \"224777be5b2612bfd03d7b4bf1ac34f49fe4ca3420e829f15152ca467a38b4cc\"" Jul 9 23:46:48.170980 containerd[1923]: time="2025-07-09T23:46:48.170724845Z" level=info msg="connecting to shim 224777be5b2612bfd03d7b4bf1ac34f49fe4ca3420e829f15152ca467a38b4cc" 
address="unix:///run/containerd/s/117a65da303216995dd9b0abacc45a303687cae3fdd5b799163b1d5cc891c802" protocol=ttrpc version=3 Jul 9 23:46:48.173595 systemd[1]: Started cri-containerd-3a982ed89d071b6c2c2804bbc03ef4636faf5d78d00bd336f1382dd857f57d33.scope - libcontainer container 3a982ed89d071b6c2c2804bbc03ef4636faf5d78d00bd336f1382dd857f57d33. Jul 9 23:46:48.191274 systemd[1]: Started cri-containerd-224777be5b2612bfd03d7b4bf1ac34f49fe4ca3420e829f15152ca467a38b4cc.scope - libcontainer container 224777be5b2612bfd03d7b4bf1ac34f49fe4ca3420e829f15152ca467a38b4cc. Jul 9 23:46:48.238566 containerd[1923]: time="2025-07-09T23:46:48.238536468Z" level=info msg="StartContainer for \"3a982ed89d071b6c2c2804bbc03ef4636faf5d78d00bd336f1382dd857f57d33\" returns successfully" Jul 9 23:46:48.247021 containerd[1923]: time="2025-07-09T23:46:48.247000375Z" level=info msg="StartContainer for \"224777be5b2612bfd03d7b4bf1ac34f49fe4ca3420e829f15152ca467a38b4cc\" returns successfully" Jul 9 23:46:49.207868 kubelet[2953]: E0709 23:46:49.207832 2953 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4344.1.1-n-5de0cd73c3\" not found" node="ci-4344.1.1-n-5de0cd73c3" Jul 9 23:46:49.582200 kubelet[2953]: E0709 23:46:49.582161 2953 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4344.1.1-n-5de0cd73c3" not found Jul 9 23:46:49.591709 kubelet[2953]: I0709 23:46:49.591542 2953 kubelet_node_status.go:72] "Attempting to register node" node="ci-4344.1.1-n-5de0cd73c3" Jul 9 23:46:49.599711 kubelet[2953]: I0709 23:46:49.599691 2953 kubelet_node_status.go:75] "Successfully registered node" node="ci-4344.1.1-n-5de0cd73c3" Jul 9 23:46:49.599955 kubelet[2953]: E0709 23:46:49.599931 2953 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4344.1.1-n-5de0cd73c3\": node \"ci-4344.1.1-n-5de0cd73c3\" not found" Jul 9 
23:46:49.607616 kubelet[2953]: E0709 23:46:49.607594 2953 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4344.1.1-n-5de0cd73c3\" not found" Jul 9 23:46:49.708262 kubelet[2953]: E0709 23:46:49.708222 2953 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4344.1.1-n-5de0cd73c3\" not found" Jul 9 23:46:49.808715 kubelet[2953]: E0709 23:46:49.808675 2953 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4344.1.1-n-5de0cd73c3\" not found" Jul 9 23:46:49.830866 update_engine[1907]: I20250709 23:46:49.830803 1907 update_attempter.cc:509] Updating boot flags... Jul 9 23:46:49.909609 kubelet[2953]: E0709 23:46:49.909479 2953 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4344.1.1-n-5de0cd73c3\" not found" Jul 9 23:46:50.010239 kubelet[2953]: E0709 23:46:50.010206 2953 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4344.1.1-n-5de0cd73c3\" not found" Jul 9 23:46:50.110761 kubelet[2953]: E0709 23:46:50.110721 2953 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4344.1.1-n-5de0cd73c3\" not found" Jul 9 23:46:50.211529 kubelet[2953]: E0709 23:46:50.211426 2953 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4344.1.1-n-5de0cd73c3\" not found" Jul 9 23:46:50.312505 kubelet[2953]: E0709 23:46:50.312466 2953 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4344.1.1-n-5de0cd73c3\" not found" Jul 9 23:46:50.412925 kubelet[2953]: E0709 23:46:50.412888 2953 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4344.1.1-n-5de0cd73c3\" not found" Jul 9 23:46:50.513533 kubelet[2953]: E0709 23:46:50.513502 2953 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4344.1.1-n-5de0cd73c3\" not found" Jul 9 
23:46:50.614196 kubelet[2953]: E0709 23:46:50.614157 2953 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4344.1.1-n-5de0cd73c3\" not found" Jul 9 23:46:50.715072 kubelet[2953]: E0709 23:46:50.715043 2953 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4344.1.1-n-5de0cd73c3\" not found" Jul 9 23:46:51.183116 kubelet[2953]: I0709 23:46:51.182457 2953 apiserver.go:52] "Watching apiserver" Jul 9 23:46:51.192207 kubelet[2953]: I0709 23:46:51.192186 2953 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 9 23:46:51.250243 kubelet[2953]: W0709 23:46:51.250216 2953 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 9 23:46:51.460908 systemd[1]: Reload requested from client PID 3285 ('systemctl') (unit session-9.scope)... Jul 9 23:46:51.460922 systemd[1]: Reloading... Jul 9 23:46:51.526258 zram_generator::config[3331]: No configuration found. Jul 9 23:46:51.593636 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 9 23:46:51.684647 systemd[1]: Reloading finished in 223 ms. Jul 9 23:46:51.715333 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 23:46:51.727777 systemd[1]: kubelet.service: Deactivated successfully. Jul 9 23:46:51.727988 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 23:46:51.728035 systemd[1]: kubelet.service: Consumed 500ms CPU time, 125M memory peak. Jul 9 23:46:51.729299 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 23:46:51.817129 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 9 23:46:51.820330 (kubelet)[3395]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 9 23:46:51.844279 kubelet[3395]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 9 23:46:51.844489 kubelet[3395]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 9 23:46:51.844530 kubelet[3395]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 9 23:46:51.844634 kubelet[3395]: I0709 23:46:51.844614 3395 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 9 23:46:51.848355 kubelet[3395]: I0709 23:46:51.848333 3395 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 9 23:46:51.848355 kubelet[3395]: I0709 23:46:51.848351 3395 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 9 23:46:51.848510 kubelet[3395]: I0709 23:46:51.848494 3395 server.go:934] "Client rotation is on, will bootstrap in background" Jul 9 23:46:51.849418 kubelet[3395]: I0709 23:46:51.849400 3395 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jul 9 23:46:51.850850 kubelet[3395]: I0709 23:46:51.850831 3395 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 9 23:46:51.854084 kubelet[3395]: I0709 23:46:51.854060 3395 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 9 23:46:51.859417 kubelet[3395]: I0709 23:46:51.859343 3395 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 9 23:46:51.859484 kubelet[3395]: I0709 23:46:51.859435 3395 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 9 23:46:51.859549 kubelet[3395]: I0709 23:46:51.859525 3395 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 9 23:46:51.859649 kubelet[3395]: I0709 23:46:51.859545 3395 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4344.1.1-n-5de0cd73c3","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal
":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 9 23:46:51.859704 kubelet[3395]: I0709 23:46:51.859653 3395 topology_manager.go:138] "Creating topology manager with none policy" Jul 9 23:46:51.859704 kubelet[3395]: I0709 23:46:51.859659 3395 container_manager_linux.go:300] "Creating device plugin manager" Jul 9 23:46:51.859704 kubelet[3395]: I0709 23:46:51.859684 3395 state_mem.go:36] "Initialized new in-memory state store" Jul 9 23:46:51.859756 kubelet[3395]: I0709 23:46:51.859748 3395 kubelet.go:408] "Attempting to sync node with API server" Jul 9 23:46:51.859772 kubelet[3395]: I0709 23:46:51.859757 3395 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 9 23:46:51.859772 kubelet[3395]: I0709 23:46:51.859769 3395 kubelet.go:314] "Adding apiserver pod source" Jul 9 23:46:51.859800 kubelet[3395]: I0709 23:46:51.859778 3395 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 9 23:46:51.860815 kubelet[3395]: I0709 23:46:51.860778 3395 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 9 23:46:51.861154 kubelet[3395]: I0709 23:46:51.861139 3395 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 9 23:46:51.861529 kubelet[3395]: I0709 23:46:51.861501 3395 server.go:1274] "Started kubelet" Jul 9 23:46:51.862752 kubelet[3395]: I0709 23:46:51.862733 3395 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 9 
23:46:51.869502 kubelet[3395]: I0709 23:46:51.869480 3395 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 9 23:46:51.870165 kubelet[3395]: I0709 23:46:51.870070 3395 server.go:449] "Adding debug handlers to kubelet server" Jul 9 23:46:51.870874 kubelet[3395]: I0709 23:46:51.870840 3395 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 9 23:46:51.871082 kubelet[3395]: I0709 23:46:51.871066 3395 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 9 23:46:51.871318 kubelet[3395]: I0709 23:46:51.871299 3395 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 9 23:46:51.872096 kubelet[3395]: I0709 23:46:51.872082 3395 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 9 23:46:51.872670 kubelet[3395]: E0709 23:46:51.872451 3395 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4344.1.1-n-5de0cd73c3\" not found" Jul 9 23:46:51.875212 kubelet[3395]: I0709 23:46:51.875132 3395 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 9 23:46:51.875292 kubelet[3395]: I0709 23:46:51.875273 3395 factory.go:221] Registration of the systemd container factory successfully Jul 9 23:46:51.875447 kubelet[3395]: I0709 23:46:51.875352 3395 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 9 23:46:51.875447 kubelet[3395]: I0709 23:46:51.875411 3395 reconciler.go:26] "Reconciler: start to sync state" Jul 9 23:46:51.879502 kubelet[3395]: I0709 23:46:51.879474 3395 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Jul 9 23:46:51.881798 kubelet[3395]: I0709 23:46:51.881762 3395 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 9 23:46:51.881798 kubelet[3395]: I0709 23:46:51.881779 3395 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 9 23:46:51.881989 kubelet[3395]: I0709 23:46:51.881888 3395 kubelet.go:2321] "Starting kubelet main sync loop" Jul 9 23:46:51.881989 kubelet[3395]: E0709 23:46:51.881922 3395 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 9 23:46:51.887142 kubelet[3395]: I0709 23:46:51.887118 3395 factory.go:221] Registration of the containerd container factory successfully Jul 9 23:46:51.915000 kubelet[3395]: I0709 23:46:51.914979 3395 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 9 23:46:51.915000 kubelet[3395]: I0709 23:46:51.914995 3395 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 9 23:46:51.915082 kubelet[3395]: I0709 23:46:51.915011 3395 state_mem.go:36] "Initialized new in-memory state store" Jul 9 23:46:51.915128 kubelet[3395]: I0709 23:46:51.915112 3395 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 9 23:46:51.915148 kubelet[3395]: I0709 23:46:51.915124 3395 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 9 23:46:51.915148 kubelet[3395]: I0709 23:46:51.915136 3395 policy_none.go:49] "None policy: Start" Jul 9 23:46:51.915741 kubelet[3395]: I0709 23:46:51.915720 3395 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 9 23:46:51.915789 kubelet[3395]: I0709 23:46:51.915748 3395 state_mem.go:35] "Initializing new in-memory state store" Jul 9 23:46:51.915857 kubelet[3395]: I0709 23:46:51.915843 3395 state_mem.go:75] "Updated machine memory state" Jul 9 23:46:51.919552 kubelet[3395]: I0709 23:46:51.919535 3395 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" 
err="checkpoint is not found" Jul 9 23:46:51.919665 kubelet[3395]: I0709 23:46:51.919651 3395 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 9 23:46:51.919699 kubelet[3395]: I0709 23:46:51.919663 3395 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 9 23:46:51.920958 kubelet[3395]: I0709 23:46:51.920724 3395 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 9 23:46:51.996903 kubelet[3395]: W0709 23:46:51.996765 3395 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 9 23:46:51.997324 kubelet[3395]: W0709 23:46:51.997306 3395 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 9 23:46:52.004408 kubelet[3395]: W0709 23:46:52.004387 3395 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 9 23:46:52.004471 kubelet[3395]: E0709 23:46:52.004425 3395 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4344.1.1-n-5de0cd73c3\" already exists" pod="kube-system/kube-apiserver-ci-4344.1.1-n-5de0cd73c3" Jul 9 23:46:52.022228 kubelet[3395]: I0709 23:46:52.022206 3395 kubelet_node_status.go:72] "Attempting to register node" node="ci-4344.1.1-n-5de0cd73c3" Jul 9 23:46:52.040136 kubelet[3395]: I0709 23:46:52.040068 3395 kubelet_node_status.go:111] "Node was previously registered" node="ci-4344.1.1-n-5de0cd73c3" Jul 9 23:46:52.040136 kubelet[3395]: I0709 23:46:52.040120 3395 kubelet_node_status.go:75] "Successfully registered node" node="ci-4344.1.1-n-5de0cd73c3" Jul 9 23:46:52.076330 kubelet[3395]: I0709 23:46:52.076271 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/99e41063747827cb9803b4a8bf1cd580-k8s-certs\") pod \"kube-controller-manager-ci-4344.1.1-n-5de0cd73c3\" (UID: \"99e41063747827cb9803b4a8bf1cd580\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-5de0cd73c3" Jul 9 23:46:52.076330 kubelet[3395]: I0709 23:46:52.076319 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/99e41063747827cb9803b4a8bf1cd580-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4344.1.1-n-5de0cd73c3\" (UID: \"99e41063747827cb9803b4a8bf1cd580\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-5de0cd73c3" Jul 9 23:46:52.076330 kubelet[3395]: I0709 23:46:52.076334 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bd38a039585d61c916c9a1e2549654b4-kubeconfig\") pod \"kube-scheduler-ci-4344.1.1-n-5de0cd73c3\" (UID: \"bd38a039585d61c916c9a1e2549654b4\") " pod="kube-system/kube-scheduler-ci-4344.1.1-n-5de0cd73c3" Jul 9 23:46:52.076457 kubelet[3395]: I0709 23:46:52.076345 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/566cc66ad54bf39420d5a2b7fd8b1547-ca-certs\") pod \"kube-apiserver-ci-4344.1.1-n-5de0cd73c3\" (UID: \"566cc66ad54bf39420d5a2b7fd8b1547\") " pod="kube-system/kube-apiserver-ci-4344.1.1-n-5de0cd73c3" Jul 9 23:46:52.076457 kubelet[3395]: I0709 23:46:52.076355 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/566cc66ad54bf39420d5a2b7fd8b1547-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4344.1.1-n-5de0cd73c3\" (UID: \"566cc66ad54bf39420d5a2b7fd8b1547\") " pod="kube-system/kube-apiserver-ci-4344.1.1-n-5de0cd73c3" Jul 9 
23:46:52.076457 kubelet[3395]: I0709 23:46:52.076366 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/99e41063747827cb9803b4a8bf1cd580-ca-certs\") pod \"kube-controller-manager-ci-4344.1.1-n-5de0cd73c3\" (UID: \"99e41063747827cb9803b4a8bf1cd580\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-5de0cd73c3" Jul 9 23:46:52.076457 kubelet[3395]: I0709 23:46:52.076374 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/99e41063747827cb9803b4a8bf1cd580-flexvolume-dir\") pod \"kube-controller-manager-ci-4344.1.1-n-5de0cd73c3\" (UID: \"99e41063747827cb9803b4a8bf1cd580\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-5de0cd73c3" Jul 9 23:46:52.076457 kubelet[3395]: I0709 23:46:52.076383 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/566cc66ad54bf39420d5a2b7fd8b1547-k8s-certs\") pod \"kube-apiserver-ci-4344.1.1-n-5de0cd73c3\" (UID: \"566cc66ad54bf39420d5a2b7fd8b1547\") " pod="kube-system/kube-apiserver-ci-4344.1.1-n-5de0cd73c3" Jul 9 23:46:52.076532 kubelet[3395]: I0709 23:46:52.076392 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/99e41063747827cb9803b4a8bf1cd580-kubeconfig\") pod \"kube-controller-manager-ci-4344.1.1-n-5de0cd73c3\" (UID: \"99e41063747827cb9803b4a8bf1cd580\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-5de0cd73c3" Jul 9 23:46:52.860856 kubelet[3395]: I0709 23:46:52.860644 3395 apiserver.go:52] "Watching apiserver" Jul 9 23:46:52.875804 kubelet[3395]: I0709 23:46:52.875773 3395 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 9 23:46:52.939671 kubelet[3395]: 
I0709 23:46:52.939452 3395 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4344.1.1-n-5de0cd73c3" podStartSLOduration=1.938888784 podStartE2EDuration="1.938888784s" podCreationTimestamp="2025-07-09 23:46:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 23:46:52.938919177 +0000 UTC m=+1.115976179" watchObservedRunningTime="2025-07-09 23:46:52.938888784 +0000 UTC m=+1.115945698" Jul 9 23:46:52.939671 kubelet[3395]: I0709 23:46:52.939549 3395 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4344.1.1-n-5de0cd73c3" podStartSLOduration=1.939545927 podStartE2EDuration="1.939545927s" podCreationTimestamp="2025-07-09 23:46:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 23:46:52.925559298 +0000 UTC m=+1.102616220" watchObservedRunningTime="2025-07-09 23:46:52.939545927 +0000 UTC m=+1.116602849" Jul 9 23:46:52.962646 kubelet[3395]: I0709 23:46:52.962606 3395 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4344.1.1-n-5de0cd73c3" podStartSLOduration=1.9625973989999999 podStartE2EDuration="1.962597399s" podCreationTimestamp="2025-07-09 23:46:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 23:46:52.950702058 +0000 UTC m=+1.127759012" watchObservedRunningTime="2025-07-09 23:46:52.962597399 +0000 UTC m=+1.139654313" Jul 9 23:46:57.062606 kubelet[3395]: I0709 23:46:57.062566 3395 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 9 23:46:57.063271 containerd[1923]: time="2025-07-09T23:46:57.063236194Z" level=info msg="No cni config template is specified, wait 
for other system components to drop the config." Jul 9 23:46:57.063684 kubelet[3395]: I0709 23:46:57.063362 3395 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 9 23:46:57.913132 kubelet[3395]: I0709 23:46:57.912556 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/73fb3030-c1ec-4d37-a323-1986e8de9851-kube-proxy\") pod \"kube-proxy-m74fj\" (UID: \"73fb3030-c1ec-4d37-a323-1986e8de9851\") " pod="kube-system/kube-proxy-m74fj" Jul 9 23:46:57.913132 kubelet[3395]: I0709 23:46:57.912594 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/73fb3030-c1ec-4d37-a323-1986e8de9851-lib-modules\") pod \"kube-proxy-m74fj\" (UID: \"73fb3030-c1ec-4d37-a323-1986e8de9851\") " pod="kube-system/kube-proxy-m74fj" Jul 9 23:46:57.913132 kubelet[3395]: I0709 23:46:57.912605 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/73fb3030-c1ec-4d37-a323-1986e8de9851-xtables-lock\") pod \"kube-proxy-m74fj\" (UID: \"73fb3030-c1ec-4d37-a323-1986e8de9851\") " pod="kube-system/kube-proxy-m74fj" Jul 9 23:46:57.913132 kubelet[3395]: I0709 23:46:57.912614 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-659nt\" (UniqueName: \"kubernetes.io/projected/73fb3030-c1ec-4d37-a323-1986e8de9851-kube-api-access-659nt\") pod \"kube-proxy-m74fj\" (UID: \"73fb3030-c1ec-4d37-a323-1986e8de9851\") " pod="kube-system/kube-proxy-m74fj" Jul 9 23:46:57.912620 systemd[1]: Created slice kubepods-besteffort-pod73fb3030_c1ec_4d37_a323_1986e8de9851.slice - libcontainer container kubepods-besteffort-pod73fb3030_c1ec_4d37_a323_1986e8de9851.slice. 
Jul 9 23:46:58.140822 systemd[1]: Created slice kubepods-besteffort-podc74bab51_957b_4fe0_8137_360223ed1e80.slice - libcontainer container kubepods-besteffort-podc74bab51_957b_4fe0_8137_360223ed1e80.slice. Jul 9 23:46:58.214717 kubelet[3395]: I0709 23:46:58.214630 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c74bab51-957b-4fe0-8137-360223ed1e80-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-mxf7q\" (UID: \"c74bab51-957b-4fe0-8137-360223ed1e80\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-mxf7q" Jul 9 23:46:58.215020 kubelet[3395]: I0709 23:46:58.214978 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trvtf\" (UniqueName: \"kubernetes.io/projected/c74bab51-957b-4fe0-8137-360223ed1e80-kube-api-access-trvtf\") pod \"tigera-operator-5bf8dfcb4-mxf7q\" (UID: \"c74bab51-957b-4fe0-8137-360223ed1e80\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-mxf7q" Jul 9 23:46:58.220162 containerd[1923]: time="2025-07-09T23:46:58.220131718Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-m74fj,Uid:73fb3030-c1ec-4d37-a323-1986e8de9851,Namespace:kube-system,Attempt:0,}" Jul 9 23:46:58.301265 containerd[1923]: time="2025-07-09T23:46:58.301189589Z" level=info msg="connecting to shim e4b3ab831a763e80fae54ffa7f0ad04686a81c58b6ca0e91756dadb491d44be4" address="unix:///run/containerd/s/49deed160db167be5a4d863e367e110dfda74e70907fac7703bf4364f6d5f81e" namespace=k8s.io protocol=ttrpc version=3 Jul 9 23:46:58.317329 systemd[1]: Started cri-containerd-e4b3ab831a763e80fae54ffa7f0ad04686a81c58b6ca0e91756dadb491d44be4.scope - libcontainer container e4b3ab831a763e80fae54ffa7f0ad04686a81c58b6ca0e91756dadb491d44be4. 
Jul 9 23:46:58.343215 containerd[1923]: time="2025-07-09T23:46:58.343143210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-m74fj,Uid:73fb3030-c1ec-4d37-a323-1986e8de9851,Namespace:kube-system,Attempt:0,} returns sandbox id \"e4b3ab831a763e80fae54ffa7f0ad04686a81c58b6ca0e91756dadb491d44be4\"" Jul 9 23:46:58.345725 containerd[1923]: time="2025-07-09T23:46:58.345686616Z" level=info msg="CreateContainer within sandbox \"e4b3ab831a763e80fae54ffa7f0ad04686a81c58b6ca0e91756dadb491d44be4\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 9 23:46:58.381003 containerd[1923]: time="2025-07-09T23:46:58.380943201Z" level=info msg="Container 752a8f0db2cb91ebe02cf7b624c4e2edfca496b3239e0fcb78d69f3cf70c6548: CDI devices from CRI Config.CDIDevices: []" Jul 9 23:46:58.405893 containerd[1923]: time="2025-07-09T23:46:58.405853249Z" level=info msg="CreateContainer within sandbox \"e4b3ab831a763e80fae54ffa7f0ad04686a81c58b6ca0e91756dadb491d44be4\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"752a8f0db2cb91ebe02cf7b624c4e2edfca496b3239e0fcb78d69f3cf70c6548\"" Jul 9 23:46:58.406444 containerd[1923]: time="2025-07-09T23:46:58.406310112Z" level=info msg="StartContainer for \"752a8f0db2cb91ebe02cf7b624c4e2edfca496b3239e0fcb78d69f3cf70c6548\"" Jul 9 23:46:58.407652 containerd[1923]: time="2025-07-09T23:46:58.407604828Z" level=info msg="connecting to shim 752a8f0db2cb91ebe02cf7b624c4e2edfca496b3239e0fcb78d69f3cf70c6548" address="unix:///run/containerd/s/49deed160db167be5a4d863e367e110dfda74e70907fac7703bf4364f6d5f81e" protocol=ttrpc version=3 Jul 9 23:46:58.421293 systemd[1]: Started cri-containerd-752a8f0db2cb91ebe02cf7b624c4e2edfca496b3239e0fcb78d69f3cf70c6548.scope - libcontainer container 752a8f0db2cb91ebe02cf7b624c4e2edfca496b3239e0fcb78d69f3cf70c6548. 
Jul 9 23:46:58.445342 containerd[1923]: time="2025-07-09T23:46:58.445323305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-mxf7q,Uid:c74bab51-957b-4fe0-8137-360223ed1e80,Namespace:tigera-operator,Attempt:0,}" Jul 9 23:46:58.447863 containerd[1923]: time="2025-07-09T23:46:58.447824726Z" level=info msg="StartContainer for \"752a8f0db2cb91ebe02cf7b624c4e2edfca496b3239e0fcb78d69f3cf70c6548\" returns successfully" Jul 9 23:46:58.509585 containerd[1923]: time="2025-07-09T23:46:58.509481241Z" level=info msg="connecting to shim bb34287e3061c9048661ee4d4f4de556808386f7c45c607e85f9a8e8f92fcdc5" address="unix:///run/containerd/s/37b26c18ff5334d8a9063b6ec051c7f9a3b94d8e29d5e66613b3d778fb61b237" namespace=k8s.io protocol=ttrpc version=3 Jul 9 23:46:58.528335 systemd[1]: Started cri-containerd-bb34287e3061c9048661ee4d4f4de556808386f7c45c607e85f9a8e8f92fcdc5.scope - libcontainer container bb34287e3061c9048661ee4d4f4de556808386f7c45c607e85f9a8e8f92fcdc5. Jul 9 23:46:58.577125 containerd[1923]: time="2025-07-09T23:46:58.577091911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-mxf7q,Uid:c74bab51-957b-4fe0-8137-360223ed1e80,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"bb34287e3061c9048661ee4d4f4de556808386f7c45c607e85f9a8e8f92fcdc5\"" Jul 9 23:46:58.578805 containerd[1923]: time="2025-07-09T23:46:58.578768992Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 9 23:46:58.933305 kubelet[3395]: I0709 23:46:58.932858 3395 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-m74fj" podStartSLOduration=1.9328448310000002 podStartE2EDuration="1.932844831s" podCreationTimestamp="2025-07-09 23:46:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 23:46:58.932813054 +0000 UTC m=+7.109869976" watchObservedRunningTime="2025-07-09 
23:46:58.932844831 +0000 UTC m=+7.109901745" Jul 9 23:46:59.022612 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1662782862.mount: Deactivated successfully. Jul 9 23:47:00.416603 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3593524090.mount: Deactivated successfully. Jul 9 23:47:00.947089 containerd[1923]: time="2025-07-09T23:47:00.947038344Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:47:00.954090 containerd[1923]: time="2025-07-09T23:47:00.954053871Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=22150610" Jul 9 23:47:00.960351 containerd[1923]: time="2025-07-09T23:47:00.960319325Z" level=info msg="ImageCreate event name:\"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:47:00.967484 containerd[1923]: time="2025-07-09T23:47:00.967450464Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:47:00.967905 containerd[1923]: time="2025-07-09T23:47:00.967768434Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"22146605\" in 2.388974825s" Jul 9 23:47:00.967905 containerd[1923]: time="2025-07-09T23:47:00.967794643Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\"" Jul 9 23:47:00.970522 containerd[1923]: time="2025-07-09T23:47:00.970494828Z" level=info 
msg="CreateContainer within sandbox \"bb34287e3061c9048661ee4d4f4de556808386f7c45c607e85f9a8e8f92fcdc5\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 9 23:47:01.024058 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1128481164.mount: Deactivated successfully. Jul 9 23:47:01.025344 containerd[1923]: time="2025-07-09T23:47:01.025317321Z" level=info msg="Container e16707c3a1374e6494872f837d2d354521e62915e399a4c8e4f4756eacc2c146: CDI devices from CRI Config.CDIDevices: []" Jul 9 23:47:01.046214 containerd[1923]: time="2025-07-09T23:47:01.046164871Z" level=info msg="CreateContainer within sandbox \"bb34287e3061c9048661ee4d4f4de556808386f7c45c607e85f9a8e8f92fcdc5\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"e16707c3a1374e6494872f837d2d354521e62915e399a4c8e4f4756eacc2c146\"" Jul 9 23:47:01.046640 containerd[1923]: time="2025-07-09T23:47:01.046613310Z" level=info msg="StartContainer for \"e16707c3a1374e6494872f837d2d354521e62915e399a4c8e4f4756eacc2c146\"" Jul 9 23:47:01.047505 containerd[1923]: time="2025-07-09T23:47:01.047480322Z" level=info msg="connecting to shim e16707c3a1374e6494872f837d2d354521e62915e399a4c8e4f4756eacc2c146" address="unix:///run/containerd/s/37b26c18ff5334d8a9063b6ec051c7f9a3b94d8e29d5e66613b3d778fb61b237" protocol=ttrpc version=3 Jul 9 23:47:01.063290 systemd[1]: Started cri-containerd-e16707c3a1374e6494872f837d2d354521e62915e399a4c8e4f4756eacc2c146.scope - libcontainer container e16707c3a1374e6494872f837d2d354521e62915e399a4c8e4f4756eacc2c146. 
Jul 9 23:47:01.089880 containerd[1923]: time="2025-07-09T23:47:01.089829788Z" level=info msg="StartContainer for \"e16707c3a1374e6494872f837d2d354521e62915e399a4c8e4f4756eacc2c146\" returns successfully" Jul 9 23:47:01.939998 kubelet[3395]: I0709 23:47:01.939847 3395 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-mxf7q" podStartSLOduration=1.549280981 podStartE2EDuration="3.939834171s" podCreationTimestamp="2025-07-09 23:46:58 +0000 UTC" firstStartedPulling="2025-07-09 23:46:58.578084617 +0000 UTC m=+6.755141531" lastFinishedPulling="2025-07-09 23:47:00.968637807 +0000 UTC m=+9.145694721" observedRunningTime="2025-07-09 23:47:01.939743896 +0000 UTC m=+10.116800810" watchObservedRunningTime="2025-07-09 23:47:01.939834171 +0000 UTC m=+10.116891085" Jul 9 23:47:06.094759 sudo[2386]: pam_unix(sudo:session): session closed for user root Jul 9 23:47:06.170009 sshd[2385]: Connection closed by 10.200.16.10 port 49772 Jul 9 23:47:06.171363 sshd-session[2383]: pam_unix(sshd:session): session closed for user core Jul 9 23:47:06.173654 systemd[1]: session-9.scope: Deactivated successfully. Jul 9 23:47:06.175252 systemd[1]: session-9.scope: Consumed 3.109s CPU time, 225.8M memory peak. Jul 9 23:47:06.176084 systemd[1]: sshd@6-10.200.20.11:22-10.200.16.10:49772.service: Deactivated successfully. Jul 9 23:47:06.182139 systemd-logind[1904]: Session 9 logged out. Waiting for processes to exit. Jul 9 23:47:06.183025 systemd-logind[1904]: Removed session 9. Jul 9 23:47:10.716434 systemd[1]: Created slice kubepods-besteffort-pode2eadf80_05a0_48cd_bad9_28a47e8cfc20.slice - libcontainer container kubepods-besteffort-pode2eadf80_05a0_48cd_bad9_28a47e8cfc20.slice. 
Jul 9 23:47:10.789802 kubelet[3395]: I0709 23:47:10.789749 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2eadf80-05a0-48cd-bad9-28a47e8cfc20-tigera-ca-bundle\") pod \"calico-typha-7dc6d88854-86rcm\" (UID: \"e2eadf80-05a0-48cd-bad9-28a47e8cfc20\") " pod="calico-system/calico-typha-7dc6d88854-86rcm" Jul 9 23:47:10.790281 kubelet[3395]: I0709 23:47:10.789786 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/e2eadf80-05a0-48cd-bad9-28a47e8cfc20-typha-certs\") pod \"calico-typha-7dc6d88854-86rcm\" (UID: \"e2eadf80-05a0-48cd-bad9-28a47e8cfc20\") " pod="calico-system/calico-typha-7dc6d88854-86rcm" Jul 9 23:47:10.790281 kubelet[3395]: I0709 23:47:10.790224 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnqrw\" (UniqueName: \"kubernetes.io/projected/e2eadf80-05a0-48cd-bad9-28a47e8cfc20-kube-api-access-lnqrw\") pod \"calico-typha-7dc6d88854-86rcm\" (UID: \"e2eadf80-05a0-48cd-bad9-28a47e8cfc20\") " pod="calico-system/calico-typha-7dc6d88854-86rcm" Jul 9 23:47:11.006493 systemd[1]: Created slice kubepods-besteffort-pod97b9df83_e40c_4518_88f9_c9755be918ad.slice - libcontainer container kubepods-besteffort-pod97b9df83_e40c_4518_88f9_c9755be918ad.slice. 
Jul 9 23:47:11.020769 containerd[1923]: time="2025-07-09T23:47:11.020585262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7dc6d88854-86rcm,Uid:e2eadf80-05a0-48cd-bad9-28a47e8cfc20,Namespace:calico-system,Attempt:0,}" Jul 9 23:47:11.092749 kubelet[3395]: I0709 23:47:11.092708 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/97b9df83-e40c-4518-88f9-c9755be918ad-cni-net-dir\") pod \"calico-node-j2xcq\" (UID: \"97b9df83-e40c-4518-88f9-c9755be918ad\") " pod="calico-system/calico-node-j2xcq" Jul 9 23:47:11.092749 kubelet[3395]: I0709 23:47:11.092748 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/97b9df83-e40c-4518-88f9-c9755be918ad-flexvol-driver-host\") pod \"calico-node-j2xcq\" (UID: \"97b9df83-e40c-4518-88f9-c9755be918ad\") " pod="calico-system/calico-node-j2xcq" Jul 9 23:47:11.092749 kubelet[3395]: I0709 23:47:11.092764 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/97b9df83-e40c-4518-88f9-c9755be918ad-node-certs\") pod \"calico-node-j2xcq\" (UID: \"97b9df83-e40c-4518-88f9-c9755be918ad\") " pod="calico-system/calico-node-j2xcq" Jul 9 23:47:11.092902 kubelet[3395]: I0709 23:47:11.092777 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjw77\" (UniqueName: \"kubernetes.io/projected/97b9df83-e40c-4518-88f9-c9755be918ad-kube-api-access-wjw77\") pod \"calico-node-j2xcq\" (UID: \"97b9df83-e40c-4518-88f9-c9755be918ad\") " pod="calico-system/calico-node-j2xcq" Jul 9 23:47:11.092902 kubelet[3395]: I0709 23:47:11.092795 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/97b9df83-e40c-4518-88f9-c9755be918ad-lib-modules\") pod \"calico-node-j2xcq\" (UID: \"97b9df83-e40c-4518-88f9-c9755be918ad\") " pod="calico-system/calico-node-j2xcq" Jul 9 23:47:11.092902 kubelet[3395]: I0709 23:47:11.092805 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/97b9df83-e40c-4518-88f9-c9755be918ad-tigera-ca-bundle\") pod \"calico-node-j2xcq\" (UID: \"97b9df83-e40c-4518-88f9-c9755be918ad\") " pod="calico-system/calico-node-j2xcq" Jul 9 23:47:11.092902 kubelet[3395]: I0709 23:47:11.092813 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/97b9df83-e40c-4518-88f9-c9755be918ad-var-lib-calico\") pod \"calico-node-j2xcq\" (UID: \"97b9df83-e40c-4518-88f9-c9755be918ad\") " pod="calico-system/calico-node-j2xcq" Jul 9 23:47:11.092902 kubelet[3395]: I0709 23:47:11.092822 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/97b9df83-e40c-4518-88f9-c9755be918ad-var-run-calico\") pod \"calico-node-j2xcq\" (UID: \"97b9df83-e40c-4518-88f9-c9755be918ad\") " pod="calico-system/calico-node-j2xcq" Jul 9 23:47:11.092980 kubelet[3395]: I0709 23:47:11.092833 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/97b9df83-e40c-4518-88f9-c9755be918ad-cni-bin-dir\") pod \"calico-node-j2xcq\" (UID: \"97b9df83-e40c-4518-88f9-c9755be918ad\") " pod="calico-system/calico-node-j2xcq" Jul 9 23:47:11.092980 kubelet[3395]: I0709 23:47:11.092846 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/97b9df83-e40c-4518-88f9-c9755be918ad-cni-log-dir\") 
pod \"calico-node-j2xcq\" (UID: \"97b9df83-e40c-4518-88f9-c9755be918ad\") " pod="calico-system/calico-node-j2xcq" Jul 9 23:47:11.092980 kubelet[3395]: I0709 23:47:11.092871 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/97b9df83-e40c-4518-88f9-c9755be918ad-policysync\") pod \"calico-node-j2xcq\" (UID: \"97b9df83-e40c-4518-88f9-c9755be918ad\") " pod="calico-system/calico-node-j2xcq" Jul 9 23:47:11.092980 kubelet[3395]: I0709 23:47:11.092879 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/97b9df83-e40c-4518-88f9-c9755be918ad-xtables-lock\") pod \"calico-node-j2xcq\" (UID: \"97b9df83-e40c-4518-88f9-c9755be918ad\") " pod="calico-system/calico-node-j2xcq" Jul 9 23:47:11.105114 containerd[1923]: time="2025-07-09T23:47:11.105026970Z" level=info msg="connecting to shim f72cd37a62962d4580aaa691f3fad6b1c536831973c4fb05cd128f87d196fda3" address="unix:///run/containerd/s/7ed6d6176799d78064f29d4794e20aa718799cb36dbc79dbe19f8bd35181cde0" namespace=k8s.io protocol=ttrpc version=3 Jul 9 23:47:11.126835 systemd[1]: Started cri-containerd-f72cd37a62962d4580aaa691f3fad6b1c536831973c4fb05cd128f87d196fda3.scope - libcontainer container f72cd37a62962d4580aaa691f3fad6b1c536831973c4fb05cd128f87d196fda3. 
Jul 9 23:47:11.159426 containerd[1923]: time="2025-07-09T23:47:11.159393649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7dc6d88854-86rcm,Uid:e2eadf80-05a0-48cd-bad9-28a47e8cfc20,Namespace:calico-system,Attempt:0,} returns sandbox id \"f72cd37a62962d4580aaa691f3fad6b1c536831973c4fb05cd128f87d196fda3\"" Jul 9 23:47:11.160885 containerd[1923]: time="2025-07-09T23:47:11.160854074Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 9 23:47:11.194373 kubelet[3395]: E0709 23:47:11.194328 3395 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:47:11.194373 kubelet[3395]: W0709 23:47:11.194346 3395 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:47:11.194373 kubelet[3395]: E0709 23:47:11.194363 3395 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 23:47:11.194705 kubelet[3395]: E0709 23:47:11.194500 3395 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:47:11.194705 kubelet[3395]: W0709 23:47:11.194506 3395 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:47:11.194705 kubelet[3395]: E0709 23:47:11.194513 3395 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 23:47:11.194705 kubelet[3395]: E0709 23:47:11.194625 3395 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:47:11.194705 kubelet[3395]: W0709 23:47:11.194631 3395 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:47:11.194705 kubelet[3395]: E0709 23:47:11.194637 3395 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 23:47:11.194799 kubelet[3395]: E0709 23:47:11.194724 3395 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:47:11.194799 kubelet[3395]: W0709 23:47:11.194729 3395 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:47:11.194799 kubelet[3395]: E0709 23:47:11.194735 3395 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 23:47:11.194841 kubelet[3395]: E0709 23:47:11.194821 3395 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:47:11.194841 kubelet[3395]: W0709 23:47:11.194826 3395 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:47:11.194841 kubelet[3395]: E0709 23:47:11.194831 3395 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 23:47:11.195549 kubelet[3395]: E0709 23:47:11.194899 3395 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:47:11.195549 kubelet[3395]: W0709 23:47:11.194908 3395 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:47:11.195549 kubelet[3395]: E0709 23:47:11.194913 3395 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 23:47:11.195549 kubelet[3395]: E0709 23:47:11.194977 3395 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:47:11.195549 kubelet[3395]: W0709 23:47:11.194981 3395 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:47:11.195549 kubelet[3395]: E0709 23:47:11.194986 3395 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 23:47:11.195549 kubelet[3395]: E0709 23:47:11.195105 3395 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:47:11.195549 kubelet[3395]: W0709 23:47:11.195111 3395 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:47:11.195549 kubelet[3395]: E0709 23:47:11.195118 3395 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 23:47:11.195549 kubelet[3395]: E0709 23:47:11.195244 3395 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:47:11.196447 kubelet[3395]: W0709 23:47:11.195251 3395 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:47:11.196447 kubelet[3395]: E0709 23:47:11.195259 3395 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 23:47:11.196447 kubelet[3395]: E0709 23:47:11.195350 3395 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:47:11.196447 kubelet[3395]: W0709 23:47:11.195355 3395 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:47:11.196447 kubelet[3395]: E0709 23:47:11.195363 3395 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 23:47:11.196447 kubelet[3395]: E0709 23:47:11.195435 3395 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:47:11.196447 kubelet[3395]: W0709 23:47:11.195440 3395 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:47:11.196447 kubelet[3395]: E0709 23:47:11.195444 3395 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 23:47:11.196447 kubelet[3395]: E0709 23:47:11.195531 3395 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:47:11.196447 kubelet[3395]: W0709 23:47:11.195535 3395 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:47:11.196595 kubelet[3395]: E0709 23:47:11.195543 3395 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 23:47:11.196595 kubelet[3395]: E0709 23:47:11.196528 3395 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:47:11.196595 kubelet[3395]: W0709 23:47:11.196539 3395 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:47:11.196595 kubelet[3395]: E0709 23:47:11.196553 3395 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 23:47:11.196672 kubelet[3395]: E0709 23:47:11.196657 3395 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:47:11.196712 kubelet[3395]: W0709 23:47:11.196698 3395 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:47:11.196975 kubelet[3395]: E0709 23:47:11.196954 3395 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 23:47:11.197520 kubelet[3395]: E0709 23:47:11.197501 3395 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:47:11.197520 kubelet[3395]: W0709 23:47:11.197515 3395 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:47:11.197842 kubelet[3395]: E0709 23:47:11.197808 3395 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 23:47:11.201337 kubelet[3395]: E0709 23:47:11.201317 3395 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:47:11.201337 kubelet[3395]: W0709 23:47:11.201331 3395 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:47:11.201963 kubelet[3395]: E0709 23:47:11.201554 3395 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:47:11.201963 kubelet[3395]: W0709 23:47:11.201563 3395 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:47:11.201963 kubelet[3395]: E0709 23:47:11.201685 3395 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:47:11.201963 kubelet[3395]: W0709 23:47:11.201691 3395 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], 
error: executable file not found in $PATH, output: "" Jul 9 23:47:11.202271 kubelet[3395]: E0709 23:47:11.202257 3395 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:47:11.202357 kubelet[3395]: W0709 23:47:11.202331 3395 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:47:11.202513 kubelet[3395]: E0709 23:47:11.202347 3395 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 23:47:11.202513 kubelet[3395]: E0709 23:47:11.202420 3395 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 23:47:11.202513 kubelet[3395]: E0709 23:47:11.202429 3395 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 23:47:11.203252 kubelet[3395]: E0709 23:47:11.203240 3395 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:47:11.203330 kubelet[3395]: W0709 23:47:11.203318 3395 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:47:11.203379 kubelet[3395]: E0709 23:47:11.203369 3395 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 23:47:11.203424 kubelet[3395]: E0709 23:47:11.203416 3395 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 23:47:11.203607 kubelet[3395]: E0709 23:47:11.203565 3395 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:47:11.203665 kubelet[3395]: W0709 23:47:11.203654 3395 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:47:11.203725 kubelet[3395]: E0709 23:47:11.203706 3395 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 23:47:11.204163 kubelet[3395]: E0709 23:47:11.204118 3395 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:47:11.204163 kubelet[3395]: W0709 23:47:11.204128 3395 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:47:11.204163 kubelet[3395]: E0709 23:47:11.204141 3395 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 23:47:11.204286 kubelet[3395]: E0709 23:47:11.204267 3395 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:47:11.204286 kubelet[3395]: W0709 23:47:11.204283 3395 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:47:11.204349 kubelet[3395]: E0709 23:47:11.204293 3395 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 23:47:11.293726 kubelet[3395]: E0709 23:47:11.293493 3395 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vn5rz" podUID="1652e582-9df7-47e7-ad42-d56cc98d0a95" Jul 9 23:47:11.310326 containerd[1923]: time="2025-07-09T23:47:11.310289123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-j2xcq,Uid:97b9df83-e40c-4518-88f9-c9755be918ad,Namespace:calico-system,Attempt:0,}" Jul 9 23:47:11.380606 containerd[1923]: time="2025-07-09T23:47:11.380472367Z" level=info msg="connecting to shim bf19aa5f4e5077f36ed7e40bbac01fc49f987221686c3addf86fcf0569e7cd37" address="unix:///run/containerd/s/8015cfa84f80b0eb589e9241e85f791f72e1979575ae60e084b735b070bdc772" namespace=k8s.io protocol=ttrpc version=3 Jul 9 23:47:11.393436 kubelet[3395]: E0709 23:47:11.393414 3395 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:47:11.393523 kubelet[3395]: W0709 23:47:11.393431 3395 driver-call.go:149] FlexVolume: driver call failed: executable: 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:47:11.393523 kubelet[3395]: E0709 23:47:11.393462 3395 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 23:47:11.393895 kubelet[3395]: E0709 23:47:11.393881 3395 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:47:11.393895 kubelet[3395]: W0709 23:47:11.393893 3395 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:47:11.393959 kubelet[3395]: E0709 23:47:11.393903 3395 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 23:47:11.394620 kubelet[3395]: E0709 23:47:11.394604 3395 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:47:11.394620 kubelet[3395]: W0709 23:47:11.394616 3395 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:47:11.394698 kubelet[3395]: E0709 23:47:11.394628 3395 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 23:47:11.394912 kubelet[3395]: E0709 23:47:11.394894 3395 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:47:11.394912 kubelet[3395]: W0709 23:47:11.394908 3395 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:47:11.394978 kubelet[3395]: E0709 23:47:11.394917 3395 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 23:47:11.395394 kubelet[3395]: E0709 23:47:11.395374 3395 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:47:11.395394 kubelet[3395]: W0709 23:47:11.395386 3395 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:47:11.395553 kubelet[3395]: E0709 23:47:11.395399 3395 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 23:47:11.395633 kubelet[3395]: E0709 23:47:11.395617 3395 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:47:11.395701 kubelet[3395]: W0709 23:47:11.395633 3395 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:47:11.395701 kubelet[3395]: E0709 23:47:11.395642 3395 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 23:47:11.395854 kubelet[3395]: E0709 23:47:11.395831 3395 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:47:11.395854 kubelet[3395]: W0709 23:47:11.395841 3395 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:47:11.395854 kubelet[3395]: E0709 23:47:11.395849 3395 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 23:47:11.395981 kubelet[3395]: E0709 23:47:11.395970 3395 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:47:11.395981 kubelet[3395]: W0709 23:47:11.395978 3395 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:47:11.396029 kubelet[3395]: E0709 23:47:11.395985 3395 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 23:47:11.396239 kubelet[3395]: E0709 23:47:11.396080 3395 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:47:11.396239 kubelet[3395]: W0709 23:47:11.396101 3395 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:47:11.396239 kubelet[3395]: E0709 23:47:11.396108 3395 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 23:47:11.396239 kubelet[3395]: E0709 23:47:11.396206 3395 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:47:11.396239 kubelet[3395]: W0709 23:47:11.396211 3395 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:47:11.396239 kubelet[3395]: E0709 23:47:11.396217 3395 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 23:47:11.396370 kubelet[3395]: E0709 23:47:11.396323 3395 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:47:11.396370 kubelet[3395]: W0709 23:47:11.396333 3395 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:47:11.396370 kubelet[3395]: E0709 23:47:11.396338 3395 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 23:47:11.396616 kubelet[3395]: E0709 23:47:11.396434 3395 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:47:11.396616 kubelet[3395]: W0709 23:47:11.396441 3395 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:47:11.396616 kubelet[3395]: E0709 23:47:11.396448 3395 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 23:47:11.396616 kubelet[3395]: E0709 23:47:11.396560 3395 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:47:11.396616 kubelet[3395]: W0709 23:47:11.396566 3395 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:47:11.396616 kubelet[3395]: E0709 23:47:11.396571 3395 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 23:47:11.396782 kubelet[3395]: E0709 23:47:11.396662 3395 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:47:11.396782 kubelet[3395]: W0709 23:47:11.396666 3395 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:47:11.396782 kubelet[3395]: E0709 23:47:11.396671 3395 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 23:47:11.396782 kubelet[3395]: E0709 23:47:11.396753 3395 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:47:11.396782 kubelet[3395]: W0709 23:47:11.396768 3395 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:47:11.396782 kubelet[3395]: E0709 23:47:11.396773 3395 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 23:47:11.397091 kubelet[3395]: E0709 23:47:11.396860 3395 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:47:11.397091 kubelet[3395]: W0709 23:47:11.396870 3395 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:47:11.397091 kubelet[3395]: E0709 23:47:11.396878 3395 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 23:47:11.397091 kubelet[3395]: E0709 23:47:11.397038 3395 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:47:11.397091 kubelet[3395]: W0709 23:47:11.397045 3395 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:47:11.397337 kubelet[3395]: E0709 23:47:11.397186 3395 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 23:47:11.397337 kubelet[3395]: E0709 23:47:11.397314 3395 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:47:11.397337 kubelet[3395]: W0709 23:47:11.397321 3395 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:47:11.397337 kubelet[3395]: E0709 23:47:11.397334 3395 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 23:47:11.397497 kubelet[3395]: E0709 23:47:11.397474 3395 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:47:11.397497 kubelet[3395]: W0709 23:47:11.397483 3395 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:47:11.397497 kubelet[3395]: E0709 23:47:11.397491 3395 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 23:47:11.397648 kubelet[3395]: E0709 23:47:11.397607 3395 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:47:11.397648 kubelet[3395]: W0709 23:47:11.397613 3395 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:47:11.397648 kubelet[3395]: E0709 23:47:11.397619 3395 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 23:47:11.397808 kubelet[3395]: E0709 23:47:11.397796 3395 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:47:11.397808 kubelet[3395]: W0709 23:47:11.397805 3395 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:47:11.397902 kubelet[3395]: E0709 23:47:11.397811 3395 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 23:47:11.397902 kubelet[3395]: I0709 23:47:11.397828 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1652e582-9df7-47e7-ad42-d56cc98d0a95-socket-dir\") pod \"csi-node-driver-vn5rz\" (UID: \"1652e582-9df7-47e7-ad42-d56cc98d0a95\") " pod="calico-system/csi-node-driver-vn5rz" Jul 9 23:47:11.397952 kubelet[3395]: E0709 23:47:11.397940 3395 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:47:11.397952 kubelet[3395]: W0709 23:47:11.397946 3395 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:47:11.397952 kubelet[3395]: E0709 23:47:11.397955 3395 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 23:47:11.397952 kubelet[3395]: I0709 23:47:11.397965 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/1652e582-9df7-47e7-ad42-d56cc98d0a95-varrun\") pod \"csi-node-driver-vn5rz\" (UID: \"1652e582-9df7-47e7-ad42-d56cc98d0a95\") " pod="calico-system/csi-node-driver-vn5rz" Jul 9 23:47:11.398238 kubelet[3395]: E0709 23:47:11.398223 3395 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:47:11.398375 kubelet[3395]: W0709 23:47:11.398277 3395 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:47:11.398375 kubelet[3395]: E0709 23:47:11.398294 3395 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 23:47:11.398307 systemd[1]: Started cri-containerd-bf19aa5f4e5077f36ed7e40bbac01fc49f987221686c3addf86fcf0569e7cd37.scope - libcontainer container bf19aa5f4e5077f36ed7e40bbac01fc49f987221686c3addf86fcf0569e7cd37. Jul 9 23:47:11.398730 kubelet[3395]: E0709 23:47:11.398662 3395 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:47:11.398730 kubelet[3395]: W0709 23:47:11.398690 3395 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:47:11.398730 kubelet[3395]: E0709 23:47:11.398702 3395 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 23:47:11.399121 kubelet[3395]: E0709 23:47:11.399108 3395 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:47:11.399483 kubelet[3395]: W0709 23:47:11.399193 3395 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:47:11.399483 kubelet[3395]: E0709 23:47:11.399212 3395 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 23:47:11.399483 kubelet[3395]: I0709 23:47:11.399229 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1652e582-9df7-47e7-ad42-d56cc98d0a95-registration-dir\") pod \"csi-node-driver-vn5rz\" (UID: \"1652e582-9df7-47e7-ad42-d56cc98d0a95\") " pod="calico-system/csi-node-driver-vn5rz" Jul 9 23:47:11.400048 kubelet[3395]: E0709 23:47:11.399775 3395 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:47:11.400199 kubelet[3395]: W0709 23:47:11.400130 3395 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:47:11.400306 kubelet[3395]: E0709 23:47:11.400286 3395 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 23:47:11.400456 kubelet[3395]: I0709 23:47:11.400312 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1652e582-9df7-47e7-ad42-d56cc98d0a95-kubelet-dir\") pod \"csi-node-driver-vn5rz\" (UID: \"1652e582-9df7-47e7-ad42-d56cc98d0a95\") " pod="calico-system/csi-node-driver-vn5rz" Jul 9 23:47:11.400702 kubelet[3395]: E0709 23:47:11.400684 3395 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:47:11.400988 kubelet[3395]: W0709 23:47:11.400970 3395 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:47:11.401445 kubelet[3395]: E0709 23:47:11.401374 3395 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 23:47:11.401772 kubelet[3395]: E0709 23:47:11.401758 3395 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:47:11.402150 kubelet[3395]: W0709 23:47:11.402006 3395 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:47:11.402150 kubelet[3395]: E0709 23:47:11.402044 3395 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 23:47:11.402617 kubelet[3395]: E0709 23:47:11.402547 3395 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:47:11.402973 kubelet[3395]: W0709 23:47:11.402954 3395 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:47:11.403122 kubelet[3395]: E0709 23:47:11.403049 3395 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 23:47:11.403122 kubelet[3395]: I0709 23:47:11.403078 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2cgs\" (UniqueName: \"kubernetes.io/projected/1652e582-9df7-47e7-ad42-d56cc98d0a95-kube-api-access-v2cgs\") pod \"csi-node-driver-vn5rz\" (UID: \"1652e582-9df7-47e7-ad42-d56cc98d0a95\") " pod="calico-system/csi-node-driver-vn5rz" Jul 9 23:47:11.403508 kubelet[3395]: E0709 23:47:11.403466 3395 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:47:11.403508 kubelet[3395]: W0709 23:47:11.403479 3395 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:47:11.403508 kubelet[3395]: E0709 23:47:11.403504 3395 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 23:47:11.404272 kubelet[3395]: E0709 23:47:11.404162 3395 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:47:11.404695 kubelet[3395]: W0709 23:47:11.404418 3395 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:47:11.404695 kubelet[3395]: E0709 23:47:11.404437 3395 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 23:47:11.405286 kubelet[3395]: E0709 23:47:11.405270 3395 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:47:11.405470 kubelet[3395]: W0709 23:47:11.405363 3395 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:47:11.405470 kubelet[3395]: E0709 23:47:11.405387 3395 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 23:47:11.405978 kubelet[3395]: E0709 23:47:11.405965 3395 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:47:11.406236 kubelet[3395]: W0709 23:47:11.406219 3395 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:47:11.406529 kubelet[3395]: E0709 23:47:11.406439 3395 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 23:47:11.406784 kubelet[3395]: E0709 23:47:11.406703 3395 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:47:11.406983 kubelet[3395]: W0709 23:47:11.406966 3395 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:47:11.407127 kubelet[3395]: E0709 23:47:11.407046 3395 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 23:47:11.407308 kubelet[3395]: E0709 23:47:11.407297 3395 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:47:11.407496 kubelet[3395]: W0709 23:47:11.407430 3395 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:47:11.407496 kubelet[3395]: E0709 23:47:11.407450 3395 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 23:47:11.434834 containerd[1923]: time="2025-07-09T23:47:11.434800516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-j2xcq,Uid:97b9df83-e40c-4518-88f9-c9755be918ad,Namespace:calico-system,Attempt:0,} returns sandbox id \"bf19aa5f4e5077f36ed7e40bbac01fc49f987221686c3addf86fcf0569e7cd37\"" Jul 9 23:47:11.508188 kubelet[3395]: E0709 23:47:11.507999 3395 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:47:11.508188 kubelet[3395]: W0709 23:47:11.508151 3395 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:47:11.508188 kubelet[3395]: E0709 23:47:11.508168 3395 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 23:47:11.508692 kubelet[3395]: E0709 23:47:11.508634 3395 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:47:11.508903 kubelet[3395]: W0709 23:47:11.508775 3395 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:47:11.508903 kubelet[3395]: E0709 23:47:11.508798 3395 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 23:47:11.509283 kubelet[3395]: E0709 23:47:11.509212 3395 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:47:11.509283 kubelet[3395]: W0709 23:47:11.509224 3395 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:47:11.509283 kubelet[3395]: E0709 23:47:11.509237 3395 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 23:47:11.509858 kubelet[3395]: E0709 23:47:11.509792 3395 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:47:11.510016 kubelet[3395]: W0709 23:47:11.509949 3395 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:47:11.510016 kubelet[3395]: E0709 23:47:11.509972 3395 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 23:47:11.510216 kubelet[3395]: E0709 23:47:11.510197 3395 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:47:11.510216 kubelet[3395]: W0709 23:47:11.510213 3395 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:47:11.510369 kubelet[3395]: E0709 23:47:11.510232 3395 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 23:47:11.510552 kubelet[3395]: E0709 23:47:11.510521 3395 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:47:11.510552 kubelet[3395]: W0709 23:47:11.510533 3395 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:47:11.510789 kubelet[3395]: E0709 23:47:11.510703 3395 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 23:47:11.510980 kubelet[3395]: E0709 23:47:11.510963 3395 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:47:11.510980 kubelet[3395]: W0709 23:47:11.510976 3395 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:47:11.511153 kubelet[3395]: E0709 23:47:11.511137 3395 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 23:47:11.511439 kubelet[3395]: E0709 23:47:11.511411 3395 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:47:11.511439 kubelet[3395]: W0709 23:47:11.511426 3395 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:47:11.511439 kubelet[3395]: E0709 23:47:11.511439 3395 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 23:47:11.512165 kubelet[3395]: E0709 23:47:11.512149 3395 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:47:11.512165 kubelet[3395]: W0709 23:47:11.512162 3395 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:47:11.512318 kubelet[3395]: E0709 23:47:11.512212 3395 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 23:47:11.512517 kubelet[3395]: E0709 23:47:11.512496 3395 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:47:11.512517 kubelet[3395]: W0709 23:47:11.512508 3395 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:47:11.512517 kubelet[3395]: E0709 23:47:11.512532 3395 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 23:47:11.513051 kubelet[3395]: E0709 23:47:11.513035 3395 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:47:11.513051 kubelet[3395]: W0709 23:47:11.513047 3395 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:47:11.513162 kubelet[3395]: E0709 23:47:11.513100 3395 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 23:47:11.513352 kubelet[3395]: E0709 23:47:11.513334 3395 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:47:11.513352 kubelet[3395]: W0709 23:47:11.513345 3395 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:47:11.513463 kubelet[3395]: E0709 23:47:11.513393 3395 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 23:47:11.513926 kubelet[3395]: E0709 23:47:11.513905 3395 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:47:11.513926 kubelet[3395]: W0709 23:47:11.513921 3395 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:47:11.514054 kubelet[3395]: E0709 23:47:11.513975 3395 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 23:47:11.514214 kubelet[3395]: E0709 23:47:11.514199 3395 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:47:11.514214 kubelet[3395]: W0709 23:47:11.514211 3395 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:47:11.514420 kubelet[3395]: E0709 23:47:11.514265 3395 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 23:47:11.514554 kubelet[3395]: E0709 23:47:11.514538 3395 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:47:11.514554 kubelet[3395]: W0709 23:47:11.514550 3395 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:47:11.514654 kubelet[3395]: E0709 23:47:11.514599 3395 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 23:47:11.514784 kubelet[3395]: E0709 23:47:11.514771 3395 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:47:11.514784 kubelet[3395]: W0709 23:47:11.514782 3395 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:47:11.514899 kubelet[3395]: E0709 23:47:11.514825 3395 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 23:47:11.515031 kubelet[3395]: E0709 23:47:11.515017 3395 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:47:11.515031 kubelet[3395]: W0709 23:47:11.515029 3395 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:47:11.515154 kubelet[3395]: E0709 23:47:11.515042 3395 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 23:47:11.515257 kubelet[3395]: E0709 23:47:11.515243 3395 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:47:11.515257 kubelet[3395]: W0709 23:47:11.515255 3395 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:47:11.515310 kubelet[3395]: E0709 23:47:11.515269 3395 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 23:47:11.516268 kubelet[3395]: E0709 23:47:11.516252 3395 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:47:11.516268 kubelet[3395]: W0709 23:47:11.516264 3395 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:47:11.516348 kubelet[3395]: E0709 23:47:11.516279 3395 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 23:47:11.516582 kubelet[3395]: E0709 23:47:11.516566 3395 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:47:11.516582 kubelet[3395]: W0709 23:47:11.516576 3395 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:47:11.516582 kubelet[3395]: E0709 23:47:11.516587 3395 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 23:47:11.517514 kubelet[3395]: E0709 23:47:11.517497 3395 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:47:11.517514 kubelet[3395]: W0709 23:47:11.517509 3395 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:47:11.517739 kubelet[3395]: E0709 23:47:11.517555 3395 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 23:47:11.517833 kubelet[3395]: E0709 23:47:11.517819 3395 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:47:11.517833 kubelet[3395]: W0709 23:47:11.517830 3395 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:47:11.518213 kubelet[3395]: E0709 23:47:11.517891 3395 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 23:47:11.518213 kubelet[3395]: E0709 23:47:11.518138 3395 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:47:11.518213 kubelet[3395]: W0709 23:47:11.518147 3395 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:47:11.518408 kubelet[3395]: E0709 23:47:11.518197 3395 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 23:47:11.518408 kubelet[3395]: E0709 23:47:11.518380 3395 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:47:11.518408 kubelet[3395]: W0709 23:47:11.518390 3395 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:47:11.518408 kubelet[3395]: E0709 23:47:11.518404 3395 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 23:47:11.518718 kubelet[3395]: E0709 23:47:11.518697 3395 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:47:11.518718 kubelet[3395]: W0709 23:47:11.518708 3395 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:47:11.518718 kubelet[3395]: E0709 23:47:11.518716 3395 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 23:47:11.545220 kubelet[3395]: E0709 23:47:11.545098 3395 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:47:11.545220 kubelet[3395]: W0709 23:47:11.545110 3395 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:47:11.545220 kubelet[3395]: E0709 23:47:11.545121 3395 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 23:47:12.706697 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount532083481.mount: Deactivated successfully. Jul 9 23:47:12.882421 kubelet[3395]: E0709 23:47:12.882380 3395 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vn5rz" podUID="1652e582-9df7-47e7-ad42-d56cc98d0a95" Jul 9 23:47:13.120441 containerd[1923]: time="2025-07-09T23:47:13.120395976Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:47:13.124199 containerd[1923]: time="2025-07-09T23:47:13.124143614Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=33087207" Jul 9 23:47:13.128306 containerd[1923]: time="2025-07-09T23:47:13.128258305Z" level=info msg="ImageCreate event name:\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:47:13.135371 containerd[1923]: time="2025-07-09T23:47:13.135325695Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:47:13.135881 containerd[1923]: time="2025-07-09T23:47:13.135677179Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"33087061\" in 1.97470082s" Jul 9 23:47:13.135881 containerd[1923]: time="2025-07-09T23:47:13.135699899Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\"" Jul 9 23:47:13.136369 containerd[1923]: time="2025-07-09T23:47:13.136350137Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 9 23:47:13.145812 containerd[1923]: time="2025-07-09T23:47:13.145781167Z" level=info msg="CreateContainer within sandbox \"f72cd37a62962d4580aaa691f3fad6b1c536831973c4fb05cd128f87d196fda3\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 9 23:47:13.207133 containerd[1923]: time="2025-07-09T23:47:13.206650153Z" level=info msg="Container 775d313c9c2f5c174ff14dc00e1ffda9b80743135653fd7eff07b71537e2311b: CDI devices from CRI Config.CDIDevices: []" Jul 9 23:47:13.240657 containerd[1923]: time="2025-07-09T23:47:13.240622241Z" level=info msg="CreateContainer within sandbox \"f72cd37a62962d4580aaa691f3fad6b1c536831973c4fb05cd128f87d196fda3\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"775d313c9c2f5c174ff14dc00e1ffda9b80743135653fd7eff07b71537e2311b\"" Jul 9 23:47:13.241504 containerd[1923]: time="2025-07-09T23:47:13.241478582Z" level=info msg="StartContainer for 
\"775d313c9c2f5c174ff14dc00e1ffda9b80743135653fd7eff07b71537e2311b\"" Jul 9 23:47:13.242205 containerd[1923]: time="2025-07-09T23:47:13.242157373Z" level=info msg="connecting to shim 775d313c9c2f5c174ff14dc00e1ffda9b80743135653fd7eff07b71537e2311b" address="unix:///run/containerd/s/7ed6d6176799d78064f29d4794e20aa718799cb36dbc79dbe19f8bd35181cde0" protocol=ttrpc version=3 Jul 9 23:47:13.262276 systemd[1]: Started cri-containerd-775d313c9c2f5c174ff14dc00e1ffda9b80743135653fd7eff07b71537e2311b.scope - libcontainer container 775d313c9c2f5c174ff14dc00e1ffda9b80743135653fd7eff07b71537e2311b. Jul 9 23:47:13.292158 containerd[1923]: time="2025-07-09T23:47:13.292023284Z" level=info msg="StartContainer for \"775d313c9c2f5c174ff14dc00e1ffda9b80743135653fd7eff07b71537e2311b\" returns successfully" Jul 9 23:47:13.967530 kubelet[3395]: I0709 23:47:13.967380 3395 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7dc6d88854-86rcm" podStartSLOduration=1.9913589489999999 podStartE2EDuration="3.967349397s" podCreationTimestamp="2025-07-09 23:47:10 +0000 UTC" firstStartedPulling="2025-07-09 23:47:11.160241661 +0000 UTC m=+19.337298575" lastFinishedPulling="2025-07-09 23:47:13.136232109 +0000 UTC m=+21.313289023" observedRunningTime="2025-07-09 23:47:13.966616869 +0000 UTC m=+22.143673815" watchObservedRunningTime="2025-07-09 23:47:13.967349397 +0000 UTC m=+22.144406319" Jul 9 23:47:14.014659 kubelet[3395]: E0709 23:47:14.014631 3395 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:47:14.014659 kubelet[3395]: W0709 23:47:14.014652 3395 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:47:14.014784 kubelet[3395]: E0709 23:47:14.014666 3395 plugins.go:691] "Error dynamically probing plugins" err="error 
creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 23:47:14.014804 kubelet[3395]: E0709 23:47:14.014789 3395 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:47:14.014804 kubelet[3395]: W0709 23:47:14.014795 3395 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:47:14.014804 kubelet[3395]: E0709 23:47:14.014802 3395 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 23:47:14.014906 kubelet[3395]: E0709 23:47:14.014898 3395 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 23:47:14.014936 kubelet[3395]: W0709 23:47:14.014912 3395 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 23:47:14.014936 kubelet[3395]: E0709 23:47:14.014919 3395 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" [log condensed: the preceding kubelet FlexVolume probe-failure triplet (driver-call.go:262 unmarshal error, driver-call.go:149 "executable file not found in $PATH" for /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, plugins.go:691 "Error dynamically probing plugins") repeats verbatim many more times between Jul 9 23:47:14.015 and Jul 9 23:47:14.031, differing only in timestamps] Jul 9 23:47:14.031899 kubelet[3395]: E0709 23:47:14.031875 3395 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 23:47:14.524771 containerd[1923]: time="2025-07-09T23:47:14.524314636Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:47:14.529020 containerd[1923]: time="2025-07-09T23:47:14.528996842Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4266981" Jul 9 23:47:14.534204 containerd[1923]: time="2025-07-09T23:47:14.534183305Z" level=info msg="ImageCreate event name:\"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:47:14.544848 containerd[1923]: time="2025-07-09T23:47:14.544814951Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:47:14.545249 containerd[1923]: time="2025-07-09T23:47:14.545216084Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5636182\" in 1.408833498s" Jul 9 23:47:14.545249 containerd[1923]: time="2025-07-09T23:47:14.545242405Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\"" Jul 9 23:47:14.548733 containerd[1923]: time="2025-07-09T23:47:14.548707882Z" level=info msg="CreateContainer within sandbox \"bf19aa5f4e5077f36ed7e40bbac01fc49f987221686c3addf86fcf0569e7cd37\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 9 23:47:14.578725 containerd[1923]: time="2025-07-09T23:47:14.578687156Z" level=info msg="Container 0f3a4f9184abdbea7b6b0d3b5676691cc9b2bed65c829f91f32b75b660dd96b1: CDI devices from CRI Config.CDIDevices: []" Jul 9 23:47:14.606869 containerd[1923]: time="2025-07-09T23:47:14.606830223Z" level=info msg="CreateContainer within sandbox \"bf19aa5f4e5077f36ed7e40bbac01fc49f987221686c3addf86fcf0569e7cd37\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"0f3a4f9184abdbea7b6b0d3b5676691cc9b2bed65c829f91f32b75b660dd96b1\"" Jul 9 23:47:14.607447 containerd[1923]: time="2025-07-09T23:47:14.607243981Z" level=info msg="StartContainer for \"0f3a4f9184abdbea7b6b0d3b5676691cc9b2bed65c829f91f32b75b660dd96b1\"" Jul 9 23:47:14.608317 containerd[1923]: time="2025-07-09T23:47:14.608290953Z" level=info msg="connecting to shim 0f3a4f9184abdbea7b6b0d3b5676691cc9b2bed65c829f91f32b75b660dd96b1" address="unix:///run/containerd/s/8015cfa84f80b0eb589e9241e85f791f72e1979575ae60e084b735b070bdc772" protocol=ttrpc version=3 Jul 9 23:47:14.627294 systemd[1]: Started cri-containerd-0f3a4f9184abdbea7b6b0d3b5676691cc9b2bed65c829f91f32b75b660dd96b1.scope - libcontainer container 0f3a4f9184abdbea7b6b0d3b5676691cc9b2bed65c829f91f32b75b660dd96b1. Jul 9 23:47:14.662874 containerd[1923]: time="2025-07-09T23:47:14.662824725Z" level=info msg="StartContainer for \"0f3a4f9184abdbea7b6b0d3b5676691cc9b2bed65c829f91f32b75b660dd96b1\" returns successfully" Jul 9 23:47:14.666243 systemd[1]: cri-containerd-0f3a4f9184abdbea7b6b0d3b5676691cc9b2bed65c829f91f32b75b660dd96b1.scope: Deactivated successfully. 
Jul 9 23:47:14.671479 containerd[1923]: time="2025-07-09T23:47:14.671421423Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0f3a4f9184abdbea7b6b0d3b5676691cc9b2bed65c829f91f32b75b660dd96b1\" id:\"0f3a4f9184abdbea7b6b0d3b5676691cc9b2bed65c829f91f32b75b660dd96b1\" pid:4077 exited_at:{seconds:1752104834 nanos:669834833}" Jul 9 23:47:14.671479 containerd[1923]: time="2025-07-09T23:47:14.671424799Z" level=info msg="received exit event container_id:\"0f3a4f9184abdbea7b6b0d3b5676691cc9b2bed65c829f91f32b75b660dd96b1\" id:\"0f3a4f9184abdbea7b6b0d3b5676691cc9b2bed65c829f91f32b75b660dd96b1\" pid:4077 exited_at:{seconds:1752104834 nanos:669834833}" Jul 9 23:47:14.692329 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0f3a4f9184abdbea7b6b0d3b5676691cc9b2bed65c829f91f32b75b660dd96b1-rootfs.mount: Deactivated successfully. Jul 9 23:47:14.882930 kubelet[3395]: E0709 23:47:14.882811 3395 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vn5rz" podUID="1652e582-9df7-47e7-ad42-d56cc98d0a95" Jul 9 23:47:14.958207 kubelet[3395]: I0709 23:47:14.957693 3395 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 9 23:47:15.962550 containerd[1923]: time="2025-07-09T23:47:15.962503587Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 9 23:47:16.882744 kubelet[3395]: E0709 23:47:16.882708 3395 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vn5rz" podUID="1652e582-9df7-47e7-ad42-d56cc98d0a95" Jul 9 23:47:18.228458 containerd[1923]: time="2025-07-09T23:47:18.227981566Z" level=info 
msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:47:18.232045 containerd[1923]: time="2025-07-09T23:47:18.232021449Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=65888320" Jul 9 23:47:18.236798 containerd[1923]: time="2025-07-09T23:47:18.236777754Z" level=info msg="ImageCreate event name:\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:47:18.241201 containerd[1923]: time="2025-07-09T23:47:18.241171111Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:47:18.241552 containerd[1923]: time="2025-07-09T23:47:18.241474769Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"67257561\" in 2.278937012s" Jul 9 23:47:18.241552 containerd[1923]: time="2025-07-09T23:47:18.241501249Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\"" Jul 9 23:47:18.244031 containerd[1923]: time="2025-07-09T23:47:18.243816088Z" level=info msg="CreateContainer within sandbox \"bf19aa5f4e5077f36ed7e40bbac01fc49f987221686c3addf86fcf0569e7cd37\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 9 23:47:18.284863 containerd[1923]: time="2025-07-09T23:47:18.283309282Z" level=info msg="Container 7276bc65b27cfee4943eff4812fa9eae37a6657226d49dd76a9167b15838b889: CDI devices from CRI Config.CDIDevices: 
[]" Jul 9 23:47:18.311814 containerd[1923]: time="2025-07-09T23:47:18.311773484Z" level=info msg="CreateContainer within sandbox \"bf19aa5f4e5077f36ed7e40bbac01fc49f987221686c3addf86fcf0569e7cd37\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"7276bc65b27cfee4943eff4812fa9eae37a6657226d49dd76a9167b15838b889\"" Jul 9 23:47:18.312251 containerd[1923]: time="2025-07-09T23:47:18.312141367Z" level=info msg="StartContainer for \"7276bc65b27cfee4943eff4812fa9eae37a6657226d49dd76a9167b15838b889\"" Jul 9 23:47:18.313160 containerd[1923]: time="2025-07-09T23:47:18.313131366Z" level=info msg="connecting to shim 7276bc65b27cfee4943eff4812fa9eae37a6657226d49dd76a9167b15838b889" address="unix:///run/containerd/s/8015cfa84f80b0eb589e9241e85f791f72e1979575ae60e084b735b070bdc772" protocol=ttrpc version=3 Jul 9 23:47:18.328292 systemd[1]: Started cri-containerd-7276bc65b27cfee4943eff4812fa9eae37a6657226d49dd76a9167b15838b889.scope - libcontainer container 7276bc65b27cfee4943eff4812fa9eae37a6657226d49dd76a9167b15838b889. 
Jul 9 23:47:18.399527 containerd[1923]: time="2025-07-09T23:47:18.399487346Z" level=info msg="StartContainer for \"7276bc65b27cfee4943eff4812fa9eae37a6657226d49dd76a9167b15838b889\" returns successfully" Jul 9 23:47:18.883231 kubelet[3395]: E0709 23:47:18.883182 3395 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vn5rz" podUID="1652e582-9df7-47e7-ad42-d56cc98d0a95" Jul 9 23:47:19.452562 containerd[1923]: time="2025-07-09T23:47:19.452519379Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 9 23:47:19.455081 systemd[1]: cri-containerd-7276bc65b27cfee4943eff4812fa9eae37a6657226d49dd76a9167b15838b889.scope: Deactivated successfully. Jul 9 23:47:19.455824 systemd[1]: cri-containerd-7276bc65b27cfee4943eff4812fa9eae37a6657226d49dd76a9167b15838b889.scope: Consumed 303ms CPU time, 190.3M memory peak, 165.8M written to disk. 
Jul 9 23:47:19.456911 containerd[1923]: time="2025-07-09T23:47:19.456735483Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7276bc65b27cfee4943eff4812fa9eae37a6657226d49dd76a9167b15838b889\" id:\"7276bc65b27cfee4943eff4812fa9eae37a6657226d49dd76a9167b15838b889\" pid:4138 exited_at:{seconds:1752104839 nanos:456484964}" Jul 9 23:47:19.456971 containerd[1923]: time="2025-07-09T23:47:19.456815862Z" level=info msg="received exit event container_id:\"7276bc65b27cfee4943eff4812fa9eae37a6657226d49dd76a9167b15838b889\" id:\"7276bc65b27cfee4943eff4812fa9eae37a6657226d49dd76a9167b15838b889\" pid:4138 exited_at:{seconds:1752104839 nanos:456484964}" Jul 9 23:47:19.471115 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7276bc65b27cfee4943eff4812fa9eae37a6657226d49dd76a9167b15838b889-rootfs.mount: Deactivated successfully. Jul 9 23:47:19.534342 kubelet[3395]: I0709 23:47:19.534314 3395 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 9 23:47:19.844498 kubelet[3395]: I0709 23:47:19.662929 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9p99j\" (UniqueName: \"kubernetes.io/projected/8380587f-2205-4833-898e-ec294776d38c-kube-api-access-9p99j\") pod \"whisker-754c546765-2d42m\" (UID: \"8380587f-2205-4833-898e-ec294776d38c\") " pod="calico-system/whisker-754c546765-2d42m" Jul 9 23:47:19.844498 kubelet[3395]: I0709 23:47:19.662953 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8b6f\" (UniqueName: \"kubernetes.io/projected/5351ad3a-79dc-4ade-b519-9b8bd32331ed-kube-api-access-j8b6f\") pod \"calico-kube-controllers-6ff74f4fb8-w9w6s\" (UID: \"5351ad3a-79dc-4ade-b519-9b8bd32331ed\") " pod="calico-system/calico-kube-controllers-6ff74f4fb8-w9w6s" Jul 9 23:47:19.844498 kubelet[3395]: I0709 23:47:19.662968 3395 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/a1f01476-8864-45e5-a57f-ac547a07aedb-goldmane-key-pair\") pod \"goldmane-58fd7646b9-cpv7t\" (UID: \"a1f01476-8864-45e5-a57f-ac547a07aedb\") " pod="calico-system/goldmane-58fd7646b9-cpv7t" Jul 9 23:47:19.844498 kubelet[3395]: I0709 23:47:19.662980 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8380587f-2205-4833-898e-ec294776d38c-whisker-backend-key-pair\") pod \"whisker-754c546765-2d42m\" (UID: \"8380587f-2205-4833-898e-ec294776d38c\") " pod="calico-system/whisker-754c546765-2d42m" Jul 9 23:47:19.844498 kubelet[3395]: I0709 23:47:19.662991 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/34683483-f4f9-4403-b516-8041d5bb8797-config-volume\") pod \"coredns-7c65d6cfc9-fqhgt\" (UID: \"34683483-f4f9-4403-b516-8041d5bb8797\") " pod="kube-system/coredns-7c65d6cfc9-fqhgt" Jul 9 23:47:19.578369 systemd[1]: Created slice kubepods-burstable-pod41443e43_d633_4a6f_8353_8ac0a7cc324a.slice - libcontainer container kubepods-burstable-pod41443e43_d633_4a6f_8353_8ac0a7cc324a.slice. 
Jul 9 23:47:19.844775 kubelet[3395]: I0709 23:47:19.663005 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1f01476-8864-45e5-a57f-ac547a07aedb-config\") pod \"goldmane-58fd7646b9-cpv7t\" (UID: \"a1f01476-8864-45e5-a57f-ac547a07aedb\") " pod="calico-system/goldmane-58fd7646b9-cpv7t" Jul 9 23:47:19.844775 kubelet[3395]: I0709 23:47:19.663015 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f46kx\" (UniqueName: \"kubernetes.io/projected/a1f01476-8864-45e5-a57f-ac547a07aedb-kube-api-access-f46kx\") pod \"goldmane-58fd7646b9-cpv7t\" (UID: \"a1f01476-8864-45e5-a57f-ac547a07aedb\") " pod="calico-system/goldmane-58fd7646b9-cpv7t" Jul 9 23:47:19.844775 kubelet[3395]: I0709 23:47:19.663026 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8380587f-2205-4833-898e-ec294776d38c-whisker-ca-bundle\") pod \"whisker-754c546765-2d42m\" (UID: \"8380587f-2205-4833-898e-ec294776d38c\") " pod="calico-system/whisker-754c546765-2d42m" Jul 9 23:47:19.844775 kubelet[3395]: I0709 23:47:19.663037 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9cfb4053-28d0-4909-b1fd-e33040f413d8-calico-apiserver-certs\") pod \"calico-apiserver-dd99d6fcb-klwdl\" (UID: \"9cfb4053-28d0-4909-b1fd-e33040f413d8\") " pod="calico-apiserver/calico-apiserver-dd99d6fcb-klwdl" Jul 9 23:47:19.844775 kubelet[3395]: I0709 23:47:19.663047 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xppvq\" (UniqueName: \"kubernetes.io/projected/41443e43-d633-4a6f-8353-8ac0a7cc324a-kube-api-access-xppvq\") pod \"coredns-7c65d6cfc9-qhgc2\" (UID: 
\"41443e43-d633-4a6f-8353-8ac0a7cc324a\") " pod="kube-system/coredns-7c65d6cfc9-qhgc2" Jul 9 23:47:19.590139 systemd[1]: Created slice kubepods-besteffort-pod9cfb4053_28d0_4909_b1fd_e33040f413d8.slice - libcontainer container kubepods-besteffort-pod9cfb4053_28d0_4909_b1fd_e33040f413d8.slice. Jul 9 23:47:19.844913 kubelet[3395]: I0709 23:47:19.663058 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zp8tv\" (UniqueName: \"kubernetes.io/projected/34683483-f4f9-4403-b516-8041d5bb8797-kube-api-access-zp8tv\") pod \"coredns-7c65d6cfc9-fqhgt\" (UID: \"34683483-f4f9-4403-b516-8041d5bb8797\") " pod="kube-system/coredns-7c65d6cfc9-fqhgt" Jul 9 23:47:19.844913 kubelet[3395]: I0709 23:47:19.663068 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/774cf081-2fd4-4dbe-815a-60b8b7aa36b2-calico-apiserver-certs\") pod \"calico-apiserver-dd99d6fcb-jl4z7\" (UID: \"774cf081-2fd4-4dbe-815a-60b8b7aa36b2\") " pod="calico-apiserver/calico-apiserver-dd99d6fcb-jl4z7" Jul 9 23:47:19.844913 kubelet[3395]: I0709 23:47:19.663079 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqlvw\" (UniqueName: \"kubernetes.io/projected/774cf081-2fd4-4dbe-815a-60b8b7aa36b2-kube-api-access-pqlvw\") pod \"calico-apiserver-dd99d6fcb-jl4z7\" (UID: \"774cf081-2fd4-4dbe-815a-60b8b7aa36b2\") " pod="calico-apiserver/calico-apiserver-dd99d6fcb-jl4z7" Jul 9 23:47:19.844913 kubelet[3395]: I0709 23:47:19.663090 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5351ad3a-79dc-4ade-b519-9b8bd32331ed-tigera-ca-bundle\") pod \"calico-kube-controllers-6ff74f4fb8-w9w6s\" (UID: \"5351ad3a-79dc-4ade-b519-9b8bd32331ed\") " 
pod="calico-system/calico-kube-controllers-6ff74f4fb8-w9w6s" Jul 9 23:47:19.844913 kubelet[3395]: I0709 23:47:19.663099 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/41443e43-d633-4a6f-8353-8ac0a7cc324a-config-volume\") pod \"coredns-7c65d6cfc9-qhgc2\" (UID: \"41443e43-d633-4a6f-8353-8ac0a7cc324a\") " pod="kube-system/coredns-7c65d6cfc9-qhgc2" Jul 9 23:47:19.601268 systemd[1]: Created slice kubepods-burstable-pod34683483_f4f9_4403_b516_8041d5bb8797.slice - libcontainer container kubepods-burstable-pod34683483_f4f9_4403_b516_8041d5bb8797.slice. Jul 9 23:47:19.845047 kubelet[3395]: I0709 23:47:19.663109 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a1f01476-8864-45e5-a57f-ac547a07aedb-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-cpv7t\" (UID: \"a1f01476-8864-45e5-a57f-ac547a07aedb\") " pod="calico-system/goldmane-58fd7646b9-cpv7t" Jul 9 23:47:19.845047 kubelet[3395]: I0709 23:47:19.663119 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2plrk\" (UniqueName: \"kubernetes.io/projected/9cfb4053-28d0-4909-b1fd-e33040f413d8-kube-api-access-2plrk\") pod \"calico-apiserver-dd99d6fcb-klwdl\" (UID: \"9cfb4053-28d0-4909-b1fd-e33040f413d8\") " pod="calico-apiserver/calico-apiserver-dd99d6fcb-klwdl" Jul 9 23:47:19.608882 systemd[1]: Created slice kubepods-besteffort-pod774cf081_2fd4_4dbe_815a_60b8b7aa36b2.slice - libcontainer container kubepods-besteffort-pod774cf081_2fd4_4dbe_815a_60b8b7aa36b2.slice. Jul 9 23:47:19.614592 systemd[1]: Created slice kubepods-besteffort-pod5351ad3a_79dc_4ade_b519_9b8bd32331ed.slice - libcontainer container kubepods-besteffort-pod5351ad3a_79dc_4ade_b519_9b8bd32331ed.slice. 
Jul 9 23:47:19.619436 systemd[1]: Created slice kubepods-besteffort-pod8380587f_2205_4833_898e_ec294776d38c.slice - libcontainer container kubepods-besteffort-pod8380587f_2205_4833_898e_ec294776d38c.slice. Jul 9 23:47:19.627682 systemd[1]: Created slice kubepods-besteffort-poda1f01476_8864_45e5_a57f_ac547a07aedb.slice - libcontainer container kubepods-besteffort-poda1f01476_8864_45e5_a57f_ac547a07aedb.slice. Jul 9 23:47:20.145103 containerd[1923]: time="2025-07-09T23:47:20.144999471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-qhgc2,Uid:41443e43-d633-4a6f-8353-8ac0a7cc324a,Namespace:kube-system,Attempt:0,}" Jul 9 23:47:20.147971 containerd[1923]: time="2025-07-09T23:47:20.147879918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-754c546765-2d42m,Uid:8380587f-2205-4833-898e-ec294776d38c,Namespace:calico-system,Attempt:0,}" Jul 9 23:47:20.151874 containerd[1923]: time="2025-07-09T23:47:20.151853103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-fqhgt,Uid:34683483-f4f9-4403-b516-8041d5bb8797,Namespace:kube-system,Attempt:0,}" Jul 9 23:47:20.151979 containerd[1923]: time="2025-07-09T23:47:20.151950466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6ff74f4fb8-w9w6s,Uid:5351ad3a-79dc-4ade-b519-9b8bd32331ed,Namespace:calico-system,Attempt:0,}" Jul 9 23:47:20.157601 containerd[1923]: time="2025-07-09T23:47:20.157555549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-cpv7t,Uid:a1f01476-8864-45e5-a57f-ac547a07aedb,Namespace:calico-system,Attempt:0,}" Jul 9 23:47:20.157691 containerd[1923]: time="2025-07-09T23:47:20.157558061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dd99d6fcb-jl4z7,Uid:774cf081-2fd4-4dbe-815a-60b8b7aa36b2,Namespace:calico-apiserver,Attempt:0,}" Jul 9 23:47:20.159140 containerd[1923]: time="2025-07-09T23:47:20.159122260Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-dd99d6fcb-klwdl,Uid:9cfb4053-28d0-4909-b1fd-e33040f413d8,Namespace:calico-apiserver,Attempt:0,}" Jul 9 23:47:20.426501 containerd[1923]: time="2025-07-09T23:47:20.425923932Z" level=error msg="Failed to destroy network for sandbox \"1561ca94b6cc58d61f5420c1430685661b821361c118423910d56694af742cc8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:47:20.457558 containerd[1923]: time="2025-07-09T23:47:20.457264374Z" level=error msg="Failed to destroy network for sandbox \"e4cd16d5e421e3df433d9510ebc90ffcea4de37f70635196f345d79c49530a5d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:47:20.460331 containerd[1923]: time="2025-07-09T23:47:20.460258489Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-qhgc2,Uid:41443e43-d633-4a6f-8353-8ac0a7cc324a,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1561ca94b6cc58d61f5420c1430685661b821361c118423910d56694af742cc8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:47:20.460597 kubelet[3395]: E0709 23:47:20.460471 3395 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1561ca94b6cc58d61f5420c1430685661b821361c118423910d56694af742cc8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:47:20.460597 kubelet[3395]: E0709 23:47:20.460544 3395 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1561ca94b6cc58d61f5420c1430685661b821361c118423910d56694af742cc8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-qhgc2" Jul 9 23:47:20.460597 kubelet[3395]: E0709 23:47:20.460558 3395 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1561ca94b6cc58d61f5420c1430685661b821361c118423910d56694af742cc8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-qhgc2" Jul 9 23:47:20.461688 kubelet[3395]: E0709 23:47:20.460593 3395 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-qhgc2_kube-system(41443e43-d633-4a6f-8353-8ac0a7cc324a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-qhgc2_kube-system(41443e43-d633-4a6f-8353-8ac0a7cc324a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1561ca94b6cc58d61f5420c1430685661b821361c118423910d56694af742cc8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-qhgc2" podUID="41443e43-d633-4a6f-8353-8ac0a7cc324a" Jul 9 23:47:20.467239 containerd[1923]: time="2025-07-09T23:47:20.467204909Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-754c546765-2d42m,Uid:8380587f-2205-4833-898e-ec294776d38c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"e4cd16d5e421e3df433d9510ebc90ffcea4de37f70635196f345d79c49530a5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:47:20.467497 kubelet[3395]: E0709 23:47:20.467370 3395 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4cd16d5e421e3df433d9510ebc90ffcea4de37f70635196f345d79c49530a5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:47:20.467563 kubelet[3395]: E0709 23:47:20.467498 3395 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4cd16d5e421e3df433d9510ebc90ffcea4de37f70635196f345d79c49530a5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-754c546765-2d42m" Jul 9 23:47:20.467563 kubelet[3395]: E0709 23:47:20.467523 3395 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4cd16d5e421e3df433d9510ebc90ffcea4de37f70635196f345d79c49530a5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-754c546765-2d42m" Jul 9 23:47:20.467607 kubelet[3395]: E0709 23:47:20.467559 3395 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-754c546765-2d42m_calico-system(8380587f-2205-4833-898e-ec294776d38c)\" with CreatePodSandboxError: \"Failed to 
create sandbox for pod \\\"whisker-754c546765-2d42m_calico-system(8380587f-2205-4833-898e-ec294776d38c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e4cd16d5e421e3df433d9510ebc90ffcea4de37f70635196f345d79c49530a5d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-754c546765-2d42m" podUID="8380587f-2205-4833-898e-ec294776d38c" Jul 9 23:47:20.473130 systemd[1]: run-netns-cni\x2de4b522b2\x2d3f32\x2d0796\x2d1a68\x2de37627917d0a.mount: Deactivated successfully. Jul 9 23:47:20.473569 systemd[1]: run-netns-cni\x2d1e0c1ac2\x2db1b7\x2d3a32\x2dfd47\x2dacd28351e04d.mount: Deactivated successfully. Jul 9 23:47:20.495694 containerd[1923]: time="2025-07-09T23:47:20.495656335Z" level=error msg="Failed to destroy network for sandbox \"e1dc9bca5b216d09f2ef705541d1d22907a0af78d7b2cc3812c2aa29e49ad71e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:47:20.498573 systemd[1]: run-netns-cni\x2d71ebe92a\x2d6b4f\x2d6d99\x2d518e\x2da19ad089f36f.mount: Deactivated successfully. Jul 9 23:47:20.500600 containerd[1923]: time="2025-07-09T23:47:20.500472081Z" level=error msg="Failed to destroy network for sandbox \"be1693d1ee81582ac6b4515bf76b84de5df90ea7d55932501414d19fcc3cb075\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:47:20.502572 systemd[1]: run-netns-cni\x2d1387015c\x2d8994\x2dc06e\x2d7697\x2d5e8d1419a2cc.mount: Deactivated successfully. 
Jul 9 23:47:20.507228 containerd[1923]: time="2025-07-09T23:47:20.507202606Z" level=error msg="Failed to destroy network for sandbox \"6999da1a76fa98ac5250116cd0e7119b38e55a5e9ab48e6c6093baf4de70935c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:47:20.508620 containerd[1923]: time="2025-07-09T23:47:20.508595048Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-fqhgt,Uid:34683483-f4f9-4403-b516-8041d5bb8797,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1dc9bca5b216d09f2ef705541d1d22907a0af78d7b2cc3812c2aa29e49ad71e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:47:20.509315 kubelet[3395]: E0709 23:47:20.509191 3395 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1dc9bca5b216d09f2ef705541d1d22907a0af78d7b2cc3812c2aa29e49ad71e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:47:20.509315 kubelet[3395]: E0709 23:47:20.509257 3395 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1dc9bca5b216d09f2ef705541d1d22907a0af78d7b2cc3812c2aa29e49ad71e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-fqhgt" Jul 9 23:47:20.509315 kubelet[3395]: E0709 23:47:20.509270 3395 kuberuntime_manager.go:1170] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1dc9bca5b216d09f2ef705541d1d22907a0af78d7b2cc3812c2aa29e49ad71e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-fqhgt" Jul 9 23:47:20.510429 kubelet[3395]: E0709 23:47:20.509686 3395 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-fqhgt_kube-system(34683483-f4f9-4403-b516-8041d5bb8797)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-fqhgt_kube-system(34683483-f4f9-4403-b516-8041d5bb8797)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e1dc9bca5b216d09f2ef705541d1d22907a0af78d7b2cc3812c2aa29e49ad71e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-fqhgt" podUID="34683483-f4f9-4403-b516-8041d5bb8797" Jul 9 23:47:20.510260 systemd[1]: run-netns-cni\x2d08908080\x2d3986\x2d47df\x2da02b\x2d33f522200b40.mount: Deactivated successfully. 
Jul 9 23:47:20.513467 containerd[1923]: time="2025-07-09T23:47:20.513311056Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6ff74f4fb8-w9w6s,Uid:5351ad3a-79dc-4ade-b519-9b8bd32331ed,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"be1693d1ee81582ac6b4515bf76b84de5df90ea7d55932501414d19fcc3cb075\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:47:20.513775 kubelet[3395]: E0709 23:47:20.513750 3395 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be1693d1ee81582ac6b4515bf76b84de5df90ea7d55932501414d19fcc3cb075\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:47:20.514289 kubelet[3395]: E0709 23:47:20.514237 3395 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be1693d1ee81582ac6b4515bf76b84de5df90ea7d55932501414d19fcc3cb075\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6ff74f4fb8-w9w6s" Jul 9 23:47:20.514289 kubelet[3395]: E0709 23:47:20.514260 3395 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be1693d1ee81582ac6b4515bf76b84de5df90ea7d55932501414d19fcc3cb075\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-6ff74f4fb8-w9w6s" Jul 9 23:47:20.514363 kubelet[3395]: E0709 23:47:20.514292 3395 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6ff74f4fb8-w9w6s_calico-system(5351ad3a-79dc-4ade-b519-9b8bd32331ed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6ff74f4fb8-w9w6s_calico-system(5351ad3a-79dc-4ade-b519-9b8bd32331ed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"be1693d1ee81582ac6b4515bf76b84de5df90ea7d55932501414d19fcc3cb075\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6ff74f4fb8-w9w6s" podUID="5351ad3a-79dc-4ade-b519-9b8bd32331ed" Jul 9 23:47:20.519847 containerd[1923]: time="2025-07-09T23:47:20.519823230Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-cpv7t,Uid:a1f01476-8864-45e5-a57f-ac547a07aedb,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6999da1a76fa98ac5250116cd0e7119b38e55a5e9ab48e6c6093baf4de70935c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:47:20.520133 kubelet[3395]: E0709 23:47:20.520042 3395 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6999da1a76fa98ac5250116cd0e7119b38e55a5e9ab48e6c6093baf4de70935c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:47:20.520133 kubelet[3395]: E0709 23:47:20.520073 3395 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6999da1a76fa98ac5250116cd0e7119b38e55a5e9ab48e6c6093baf4de70935c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-cpv7t" Jul 9 23:47:20.520133 kubelet[3395]: E0709 23:47:20.520085 3395 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6999da1a76fa98ac5250116cd0e7119b38e55a5e9ab48e6c6093baf4de70935c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-cpv7t" Jul 9 23:47:20.520237 kubelet[3395]: E0709 23:47:20.520108 3395 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-cpv7t_calico-system(a1f01476-8864-45e5-a57f-ac547a07aedb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-cpv7t_calico-system(a1f01476-8864-45e5-a57f-ac547a07aedb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6999da1a76fa98ac5250116cd0e7119b38e55a5e9ab48e6c6093baf4de70935c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-cpv7t" podUID="a1f01476-8864-45e5-a57f-ac547a07aedb" Jul 9 23:47:20.521100 containerd[1923]: time="2025-07-09T23:47:20.521081028Z" level=error msg="Failed to destroy network for sandbox \"4533aa3e818f7d0e7f65ecedead4e78ebda359ba619a92fcfd821a9f4e8bc257\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:47:20.523524 containerd[1923]: time="2025-07-09T23:47:20.523479133Z" level=error msg="Failed to destroy network for sandbox \"4c554d777aad8faf861678cf86140b1f04313e248ff07d84bf1e60f252415f15\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:47:20.523829 systemd[1]: run-netns-cni\x2d5078721f\x2d17fe\x2d5ccc\x2df786\x2d8ca38fbd3bf3.mount: Deactivated successfully. Jul 9 23:47:20.527849 containerd[1923]: time="2025-07-09T23:47:20.527824946Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dd99d6fcb-jl4z7,Uid:774cf081-2fd4-4dbe-815a-60b8b7aa36b2,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4533aa3e818f7d0e7f65ecedead4e78ebda359ba619a92fcfd821a9f4e8bc257\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:47:20.528082 kubelet[3395]: E0709 23:47:20.528041 3395 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4533aa3e818f7d0e7f65ecedead4e78ebda359ba619a92fcfd821a9f4e8bc257\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:47:20.528082 kubelet[3395]: E0709 23:47:20.528074 3395 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4533aa3e818f7d0e7f65ecedead4e78ebda359ba619a92fcfd821a9f4e8bc257\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-dd99d6fcb-jl4z7" Jul 9 23:47:20.528264 kubelet[3395]: E0709 23:47:20.528085 3395 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4533aa3e818f7d0e7f65ecedead4e78ebda359ba619a92fcfd821a9f4e8bc257\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-dd99d6fcb-jl4z7" Jul 9 23:47:20.528264 kubelet[3395]: E0709 23:47:20.528114 3395 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-dd99d6fcb-jl4z7_calico-apiserver(774cf081-2fd4-4dbe-815a-60b8b7aa36b2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-dd99d6fcb-jl4z7_calico-apiserver(774cf081-2fd4-4dbe-815a-60b8b7aa36b2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4533aa3e818f7d0e7f65ecedead4e78ebda359ba619a92fcfd821a9f4e8bc257\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-dd99d6fcb-jl4z7" podUID="774cf081-2fd4-4dbe-815a-60b8b7aa36b2" Jul 9 23:47:20.534685 containerd[1923]: time="2025-07-09T23:47:20.534652969Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dd99d6fcb-klwdl,Uid:9cfb4053-28d0-4909-b1fd-e33040f413d8,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c554d777aad8faf861678cf86140b1f04313e248ff07d84bf1e60f252415f15\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:47:20.534887 kubelet[3395]: E0709 23:47:20.534774 3395 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c554d777aad8faf861678cf86140b1f04313e248ff07d84bf1e60f252415f15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:47:20.534887 kubelet[3395]: E0709 23:47:20.534805 3395 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c554d777aad8faf861678cf86140b1f04313e248ff07d84bf1e60f252415f15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-dd99d6fcb-klwdl" Jul 9 23:47:20.534887 kubelet[3395]: E0709 23:47:20.534816 3395 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c554d777aad8faf861678cf86140b1f04313e248ff07d84bf1e60f252415f15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-dd99d6fcb-klwdl" Jul 9 23:47:20.535020 kubelet[3395]: E0709 23:47:20.534984 3395 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-dd99d6fcb-klwdl_calico-apiserver(9cfb4053-28d0-4909-b1fd-e33040f413d8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-dd99d6fcb-klwdl_calico-apiserver(9cfb4053-28d0-4909-b1fd-e33040f413d8)\\\": rpc error: 
code = Unknown desc = failed to setup network for sandbox \\\"4c554d777aad8faf861678cf86140b1f04313e248ff07d84bf1e60f252415f15\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-dd99d6fcb-klwdl" podUID="9cfb4053-28d0-4909-b1fd-e33040f413d8" Jul 9 23:47:20.886346 systemd[1]: Created slice kubepods-besteffort-pod1652e582_9df7_47e7_ad42_d56cc98d0a95.slice - libcontainer container kubepods-besteffort-pod1652e582_9df7_47e7_ad42_d56cc98d0a95.slice. Jul 9 23:47:20.888660 containerd[1923]: time="2025-07-09T23:47:20.888617270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vn5rz,Uid:1652e582-9df7-47e7-ad42-d56cc98d0a95,Namespace:calico-system,Attempt:0,}" Jul 9 23:47:20.931814 containerd[1923]: time="2025-07-09T23:47:20.931741527Z" level=error msg="Failed to destroy network for sandbox \"1512561681275ac5952e1fff2aa96e159295053b7405dcd3113acc9e15db83ad\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:47:20.936463 containerd[1923]: time="2025-07-09T23:47:20.936408845Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vn5rz,Uid:1652e582-9df7-47e7-ad42-d56cc98d0a95,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1512561681275ac5952e1fff2aa96e159295053b7405dcd3113acc9e15db83ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:47:20.936710 kubelet[3395]: E0709 23:47:20.936616 3395 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"1512561681275ac5952e1fff2aa96e159295053b7405dcd3113acc9e15db83ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:47:20.936710 kubelet[3395]: E0709 23:47:20.936656 3395 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1512561681275ac5952e1fff2aa96e159295053b7405dcd3113acc9e15db83ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vn5rz" Jul 9 23:47:20.936710 kubelet[3395]: E0709 23:47:20.936668 3395 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1512561681275ac5952e1fff2aa96e159295053b7405dcd3113acc9e15db83ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vn5rz" Jul 9 23:47:20.936806 kubelet[3395]: E0709 23:47:20.936696 3395 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-vn5rz_calico-system(1652e582-9df7-47e7-ad42-d56cc98d0a95)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-vn5rz_calico-system(1652e582-9df7-47e7-ad42-d56cc98d0a95)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1512561681275ac5952e1fff2aa96e159295053b7405dcd3113acc9e15db83ad\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-vn5rz" 
podUID="1652e582-9df7-47e7-ad42-d56cc98d0a95" Jul 9 23:47:20.975936 containerd[1923]: time="2025-07-09T23:47:20.975908119Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 9 23:47:21.470751 systemd[1]: run-netns-cni\x2d27c293d0\x2da594\x2d849f\x2df1fe\x2d618c85701880.mount: Deactivated successfully. Jul 9 23:47:27.312615 kubelet[3395]: I0709 23:47:27.312417 3395 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 9 23:47:29.851024 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4253917461.mount: Deactivated successfully. Jul 9 23:47:30.883016 containerd[1923]: time="2025-07-09T23:47:30.882963885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-754c546765-2d42m,Uid:8380587f-2205-4833-898e-ec294776d38c,Namespace:calico-system,Attempt:0,}" Jul 9 23:47:30.883369 containerd[1923]: time="2025-07-09T23:47:30.882963885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dd99d6fcb-klwdl,Uid:9cfb4053-28d0-4909-b1fd-e33040f413d8,Namespace:calico-apiserver,Attempt:0,}" Jul 9 23:47:31.884063 containerd[1923]: time="2025-07-09T23:47:31.883929759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-cpv7t,Uid:a1f01476-8864-45e5-a57f-ac547a07aedb,Namespace:calico-system,Attempt:0,}" Jul 9 23:47:32.883651 containerd[1923]: time="2025-07-09T23:47:32.883459627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-fqhgt,Uid:34683483-f4f9-4403-b516-8041d5bb8797,Namespace:kube-system,Attempt:0,}" Jul 9 23:47:32.883861 containerd[1923]: time="2025-07-09T23:47:32.883834080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dd99d6fcb-jl4z7,Uid:774cf081-2fd4-4dbe-815a-60b8b7aa36b2,Namespace:calico-apiserver,Attempt:0,}" Jul 9 23:47:32.884078 containerd[1923]: time="2025-07-09T23:47:32.884057703Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-vn5rz,Uid:1652e582-9df7-47e7-ad42-d56cc98d0a95,Namespace:calico-system,Attempt:0,}" Jul 9 23:47:35.885711 containerd[1923]: time="2025-07-09T23:47:35.885505584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-qhgc2,Uid:41443e43-d633-4a6f-8353-8ac0a7cc324a,Namespace:kube-system,Attempt:0,}" Jul 9 23:47:35.885711 containerd[1923]: time="2025-07-09T23:47:35.885646637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6ff74f4fb8-w9w6s,Uid:5351ad3a-79dc-4ade-b519-9b8bd32331ed,Namespace:calico-system,Attempt:0,}" Jul 9 23:47:35.941141 containerd[1923]: time="2025-07-09T23:47:35.941073882Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:47:35.959477 containerd[1923]: time="2025-07-09T23:47:35.959445195Z" level=error msg="Failed to destroy network for sandbox \"c7016f4eb07bc203dea41e6c14f9c970f5e5e3859295847baddac5a690c9b5f6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:47:35.966433 containerd[1923]: time="2025-07-09T23:47:35.966365696Z" level=error msg="Failed to destroy network for sandbox \"2b7558bf6340b807c632d963d541fc50e1bf31b6df6f72915b19cf09c283792b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:47:35.972156 containerd[1923]: time="2025-07-09T23:47:35.972122775Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=152544909" Jul 9 23:47:35.983812 containerd[1923]: time="2025-07-09T23:47:35.983747697Z" level=error msg="Failed to destroy network for sandbox 
\"a2c326d65c42724fa2dea2b324d1e5254a7a1e2a715819b142de3548c32d3785\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:47:35.998261 containerd[1923]: time="2025-07-09T23:47:35.998218416Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-754c546765-2d42m,Uid:8380587f-2205-4833-898e-ec294776d38c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7016f4eb07bc203dea41e6c14f9c970f5e5e3859295847baddac5a690c9b5f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:47:35.998633 containerd[1923]: time="2025-07-09T23:47:35.998478369Z" level=error msg="Failed to destroy network for sandbox \"62679a3aafcca974517d60778c0efc94b2b450c1a36fde08b9d9ad0d79e390d4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:47:36.000606 kubelet[3395]: E0709 23:47:36.000574 3395 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7016f4eb07bc203dea41e6c14f9c970f5e5e3859295847baddac5a690c9b5f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:47:36.001705 kubelet[3395]: E0709 23:47:36.001412 3395 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7016f4eb07bc203dea41e6c14f9c970f5e5e3859295847baddac5a690c9b5f6\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-754c546765-2d42m" Jul 9 23:47:36.001705 kubelet[3395]: E0709 23:47:36.001526 3395 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7016f4eb07bc203dea41e6c14f9c970f5e5e3859295847baddac5a690c9b5f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-754c546765-2d42m" Jul 9 23:47:36.001705 kubelet[3395]: E0709 23:47:36.001566 3395 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-754c546765-2d42m_calico-system(8380587f-2205-4833-898e-ec294776d38c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-754c546765-2d42m_calico-system(8380587f-2205-4833-898e-ec294776d38c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c7016f4eb07bc203dea41e6c14f9c970f5e5e3859295847baddac5a690c9b5f6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-754c546765-2d42m" podUID="8380587f-2205-4833-898e-ec294776d38c" Jul 9 23:47:36.015912 containerd[1923]: time="2025-07-09T23:47:36.015838768Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dd99d6fcb-klwdl,Uid:9cfb4053-28d0-4909-b1fd-e33040f413d8,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b7558bf6340b807c632d963d541fc50e1bf31b6df6f72915b19cf09c283792b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:47:36.016430 kubelet[3395]: E0709 23:47:36.016115 3395 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b7558bf6340b807c632d963d541fc50e1bf31b6df6f72915b19cf09c283792b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:47:36.016430 kubelet[3395]: E0709 23:47:36.016153 3395 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b7558bf6340b807c632d963d541fc50e1bf31b6df6f72915b19cf09c283792b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-dd99d6fcb-klwdl" Jul 9 23:47:36.016430 kubelet[3395]: E0709 23:47:36.016166 3395 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b7558bf6340b807c632d963d541fc50e1bf31b6df6f72915b19cf09c283792b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-dd99d6fcb-klwdl" Jul 9 23:47:36.016670 kubelet[3395]: E0709 23:47:36.016198 3395 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-dd99d6fcb-klwdl_calico-apiserver(9cfb4053-28d0-4909-b1fd-e33040f413d8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-dd99d6fcb-klwdl_calico-apiserver(9cfb4053-28d0-4909-b1fd-e33040f413d8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"2b7558bf6340b807c632d963d541fc50e1bf31b6df6f72915b19cf09c283792b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-dd99d6fcb-klwdl" podUID="9cfb4053-28d0-4909-b1fd-e33040f413d8" Jul 9 23:47:36.027294 containerd[1923]: time="2025-07-09T23:47:36.027266275Z" level=error msg="Failed to destroy network for sandbox \"6fa0ec1611ba1339bdc19ddedd481510b817bc8ee6feb69eb7f4a8b40aaf2104\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:47:36.030958 containerd[1923]: time="2025-07-09T23:47:36.030528519Z" level=info msg="ImageCreate event name:\"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:47:36.035095 containerd[1923]: time="2025-07-09T23:47:36.035066438Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-cpv7t,Uid:a1f01476-8864-45e5-a57f-ac547a07aedb,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2c326d65c42724fa2dea2b324d1e5254a7a1e2a715819b142de3548c32d3785\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:47:36.035345 kubelet[3395]: E0709 23:47:36.035319 3395 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2c326d65c42724fa2dea2b324d1e5254a7a1e2a715819b142de3548c32d3785\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Jul 9 23:47:36.035421 kubelet[3395]: E0709 23:47:36.035353 3395 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2c326d65c42724fa2dea2b324d1e5254a7a1e2a715819b142de3548c32d3785\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-cpv7t" Jul 9 23:47:36.035421 kubelet[3395]: E0709 23:47:36.035367 3395 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2c326d65c42724fa2dea2b324d1e5254a7a1e2a715819b142de3548c32d3785\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-cpv7t" Jul 9 23:47:36.035421 kubelet[3395]: E0709 23:47:36.035394 3395 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-cpv7t_calico-system(a1f01476-8864-45e5-a57f-ac547a07aedb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-cpv7t_calico-system(a1f01476-8864-45e5-a57f-ac547a07aedb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a2c326d65c42724fa2dea2b324d1e5254a7a1e2a715819b142de3548c32d3785\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-cpv7t" podUID="a1f01476-8864-45e5-a57f-ac547a07aedb" Jul 9 23:47:36.039646 containerd[1923]: time="2025-07-09T23:47:36.039072507Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7c65d6cfc9-fqhgt,Uid:34683483-f4f9-4403-b516-8041d5bb8797,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"62679a3aafcca974517d60778c0efc94b2b450c1a36fde08b9d9ad0d79e390d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:47:36.039746 kubelet[3395]: E0709 23:47:36.039345 3395 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62679a3aafcca974517d60778c0efc94b2b450c1a36fde08b9d9ad0d79e390d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:47:36.039746 kubelet[3395]: E0709 23:47:36.039477 3395 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62679a3aafcca974517d60778c0efc94b2b450c1a36fde08b9d9ad0d79e390d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-fqhgt" Jul 9 23:47:36.039746 kubelet[3395]: E0709 23:47:36.039494 3395 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62679a3aafcca974517d60778c0efc94b2b450c1a36fde08b9d9ad0d79e390d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-fqhgt" Jul 9 23:47:36.039918 kubelet[3395]: E0709 23:47:36.039523 3395 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-fqhgt_kube-system(34683483-f4f9-4403-b516-8041d5bb8797)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-fqhgt_kube-system(34683483-f4f9-4403-b516-8041d5bb8797)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"62679a3aafcca974517d60778c0efc94b2b450c1a36fde08b9d9ad0d79e390d4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-fqhgt" podUID="34683483-f4f9-4403-b516-8041d5bb8797" Jul 9 23:47:36.047222 containerd[1923]: time="2025-07-09T23:47:36.045562282Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dd99d6fcb-jl4z7,Uid:774cf081-2fd4-4dbe-815a-60b8b7aa36b2,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6fa0ec1611ba1339bdc19ddedd481510b817bc8ee6feb69eb7f4a8b40aaf2104\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:47:36.047541 kubelet[3395]: E0709 23:47:36.047423 3395 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6fa0ec1611ba1339bdc19ddedd481510b817bc8ee6feb69eb7f4a8b40aaf2104\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:47:36.047541 kubelet[3395]: E0709 23:47:36.047472 3395 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6fa0ec1611ba1339bdc19ddedd481510b817bc8ee6feb69eb7f4a8b40aaf2104\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-dd99d6fcb-jl4z7" Jul 9 23:47:36.047541 kubelet[3395]: E0709 23:47:36.047484 3395 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6fa0ec1611ba1339bdc19ddedd481510b817bc8ee6feb69eb7f4a8b40aaf2104\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-dd99d6fcb-jl4z7" Jul 9 23:47:36.047648 kubelet[3395]: E0709 23:47:36.047510 3395 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-dd99d6fcb-jl4z7_calico-apiserver(774cf081-2fd4-4dbe-815a-60b8b7aa36b2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-dd99d6fcb-jl4z7_calico-apiserver(774cf081-2fd4-4dbe-815a-60b8b7aa36b2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6fa0ec1611ba1339bdc19ddedd481510b817bc8ee6feb69eb7f4a8b40aaf2104\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-dd99d6fcb-jl4z7" podUID="774cf081-2fd4-4dbe-815a-60b8b7aa36b2" Jul 9 23:47:36.051380 containerd[1923]: time="2025-07-09T23:47:36.051114402Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:47:36.052543 containerd[1923]: time="2025-07-09T23:47:36.052506776Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with 
image id \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"152544771\" in 15.076569048s" Jul 9 23:47:36.052543 containerd[1923]: time="2025-07-09T23:47:36.052535625Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\"" Jul 9 23:47:36.054692 containerd[1923]: time="2025-07-09T23:47:36.054663335Z" level=error msg="Failed to destroy network for sandbox \"b538da5aaa2372abbde30c04e63b833eaac6593fb11cca5b53638e9ecc3da5b8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:47:36.065230 containerd[1923]: time="2025-07-09T23:47:36.064898667Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vn5rz,Uid:1652e582-9df7-47e7-ad42-d56cc98d0a95,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b538da5aaa2372abbde30c04e63b833eaac6593fb11cca5b53638e9ecc3da5b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:47:36.065321 kubelet[3395]: E0709 23:47:36.065037 3395 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b538da5aaa2372abbde30c04e63b833eaac6593fb11cca5b53638e9ecc3da5b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:47:36.065321 kubelet[3395]: E0709 23:47:36.065066 
3395 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b538da5aaa2372abbde30c04e63b833eaac6593fb11cca5b53638e9ecc3da5b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vn5rz" Jul 9 23:47:36.065321 kubelet[3395]: E0709 23:47:36.065077 3395 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b538da5aaa2372abbde30c04e63b833eaac6593fb11cca5b53638e9ecc3da5b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vn5rz" Jul 9 23:47:36.065388 kubelet[3395]: E0709 23:47:36.065102 3395 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-vn5rz_calico-system(1652e582-9df7-47e7-ad42-d56cc98d0a95)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-vn5rz_calico-system(1652e582-9df7-47e7-ad42-d56cc98d0a95)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b538da5aaa2372abbde30c04e63b833eaac6593fb11cca5b53638e9ecc3da5b8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-vn5rz" podUID="1652e582-9df7-47e7-ad42-d56cc98d0a95" Jul 9 23:47:36.071019 containerd[1923]: time="2025-07-09T23:47:36.070996493Z" level=info msg="CreateContainer within sandbox \"bf19aa5f4e5077f36ed7e40bbac01fc49f987221686c3addf86fcf0569e7cd37\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 9 23:47:36.072517 
containerd[1923]: time="2025-07-09T23:47:36.072487102Z" level=error msg="Failed to destroy network for sandbox \"c9045dc6eb64dffe7699fd992935b278bbdda6368dc4f0e4e17a44fc66d13455\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:47:36.077788 containerd[1923]: time="2025-07-09T23:47:36.077753461Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-qhgc2,Uid:41443e43-d633-4a6f-8353-8ac0a7cc324a,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9045dc6eb64dffe7699fd992935b278bbdda6368dc4f0e4e17a44fc66d13455\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:47:36.078016 kubelet[3395]: E0709 23:47:36.077975 3395 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9045dc6eb64dffe7699fd992935b278bbdda6368dc4f0e4e17a44fc66d13455\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:47:36.078157 kubelet[3395]: E0709 23:47:36.078108 3395 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9045dc6eb64dffe7699fd992935b278bbdda6368dc4f0e4e17a44fc66d13455\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-qhgc2" Jul 9 23:47:36.078157 kubelet[3395]: E0709 23:47:36.078127 3395 kuberuntime_manager.go:1170] "CreatePodSandbox for pod 
failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9045dc6eb64dffe7699fd992935b278bbdda6368dc4f0e4e17a44fc66d13455\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-qhgc2" Jul 9 23:47:36.078313 kubelet[3395]: E0709 23:47:36.078282 3395 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-qhgc2_kube-system(41443e43-d633-4a6f-8353-8ac0a7cc324a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-qhgc2_kube-system(41443e43-d633-4a6f-8353-8ac0a7cc324a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c9045dc6eb64dffe7699fd992935b278bbdda6368dc4f0e4e17a44fc66d13455\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-qhgc2" podUID="41443e43-d633-4a6f-8353-8ac0a7cc324a" Jul 9 23:47:36.081799 containerd[1923]: time="2025-07-09T23:47:36.081763578Z" level=error msg="Failed to destroy network for sandbox \"3bcd45104ae7c2cf5405c2995b30cb87177488f7976d5a3dc9bf1fc7f822bf73\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:47:36.102450 containerd[1923]: time="2025-07-09T23:47:36.102418158Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6ff74f4fb8-w9w6s,Uid:5351ad3a-79dc-4ade-b519-9b8bd32331ed,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3bcd45104ae7c2cf5405c2995b30cb87177488f7976d5a3dc9bf1fc7f822bf73\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:47:36.102639 kubelet[3395]: E0709 23:47:36.102611 3395 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3bcd45104ae7c2cf5405c2995b30cb87177488f7976d5a3dc9bf1fc7f822bf73\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 23:47:36.102717 kubelet[3395]: E0709 23:47:36.102647 3395 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3bcd45104ae7c2cf5405c2995b30cb87177488f7976d5a3dc9bf1fc7f822bf73\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6ff74f4fb8-w9w6s" Jul 9 23:47:36.102717 kubelet[3395]: E0709 23:47:36.102681 3395 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3bcd45104ae7c2cf5405c2995b30cb87177488f7976d5a3dc9bf1fc7f822bf73\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6ff74f4fb8-w9w6s" Jul 9 23:47:36.102717 kubelet[3395]: E0709 23:47:36.102709 3395 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6ff74f4fb8-w9w6s_calico-system(5351ad3a-79dc-4ade-b519-9b8bd32331ed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-kube-controllers-6ff74f4fb8-w9w6s_calico-system(5351ad3a-79dc-4ade-b519-9b8bd32331ed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3bcd45104ae7c2cf5405c2995b30cb87177488f7976d5a3dc9bf1fc7f822bf73\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6ff74f4fb8-w9w6s" podUID="5351ad3a-79dc-4ade-b519-9b8bd32331ed" Jul 9 23:47:36.110060 containerd[1923]: time="2025-07-09T23:47:36.109324099Z" level=info msg="Container b54f9d314e6f1c2bd3a3710160e0c3c20704906f83bd2d9ca1eeca664893d466: CDI devices from CRI Config.CDIDevices: []" Jul 9 23:47:36.128105 containerd[1923]: time="2025-07-09T23:47:36.128072905Z" level=info msg="CreateContainer within sandbox \"bf19aa5f4e5077f36ed7e40bbac01fc49f987221686c3addf86fcf0569e7cd37\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"b54f9d314e6f1c2bd3a3710160e0c3c20704906f83bd2d9ca1eeca664893d466\"" Jul 9 23:47:36.128635 containerd[1923]: time="2025-07-09T23:47:36.128596130Z" level=info msg="StartContainer for \"b54f9d314e6f1c2bd3a3710160e0c3c20704906f83bd2d9ca1eeca664893d466\"" Jul 9 23:47:36.129583 containerd[1923]: time="2025-07-09T23:47:36.129555594Z" level=info msg="connecting to shim b54f9d314e6f1c2bd3a3710160e0c3c20704906f83bd2d9ca1eeca664893d466" address="unix:///run/containerd/s/8015cfa84f80b0eb589e9241e85f791f72e1979575ae60e084b735b070bdc772" protocol=ttrpc version=3 Jul 9 23:47:36.162301 systemd[1]: Started cri-containerd-b54f9d314e6f1c2bd3a3710160e0c3c20704906f83bd2d9ca1eeca664893d466.scope - libcontainer container b54f9d314e6f1c2bd3a3710160e0c3c20704906f83bd2d9ca1eeca664893d466. 
Jul 9 23:47:36.194501 containerd[1923]: time="2025-07-09T23:47:36.194472562Z" level=info msg="StartContainer for \"b54f9d314e6f1c2bd3a3710160e0c3c20704906f83bd2d9ca1eeca664893d466\" returns successfully" Jul 9 23:47:36.391706 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 9 23:47:36.391959 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jul 9 23:47:36.821151 systemd[1]: run-netns-cni\x2df8600845\x2d3f59\x2d6423\x2d204a\x2d3bfb286a78f3.mount: Deactivated successfully. Jul 9 23:47:36.821233 systemd[1]: run-netns-cni\x2d3b1e4a63\x2d4d82\x2dbce0\x2d88f0\x2da076fd9df338.mount: Deactivated successfully. Jul 9 23:47:36.821267 systemd[1]: run-netns-cni\x2d8475d93b\x2dfa99\x2d2fea\x2db6cc\x2d8a780a34f348.mount: Deactivated successfully. Jul 9 23:47:36.821298 systemd[1]: run-netns-cni\x2d6fbc68c9\x2dd586\x2d7fbd\x2d77b9\x2df5942c4008f3.mount: Deactivated successfully. Jul 9 23:47:36.821329 systemd[1]: run-netns-cni\x2d11433a47\x2d12c1\x2d7225\x2d8e55\x2d801a951883ad.mount: Deactivated successfully. 
Jul 9 23:47:37.033481 kubelet[3395]: I0709 23:47:37.033052 3395 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-j2xcq" podStartSLOduration=2.410557079 podStartE2EDuration="27.03304036s" podCreationTimestamp="2025-07-09 23:47:10 +0000 UTC" firstStartedPulling="2025-07-09 23:47:11.436034534 +0000 UTC m=+19.613091448" lastFinishedPulling="2025-07-09 23:47:36.058517807 +0000 UTC m=+44.235574729" observedRunningTime="2025-07-09 23:47:37.031723357 +0000 UTC m=+45.208780271" watchObservedRunningTime="2025-07-09 23:47:37.03304036 +0000 UTC m=+45.210097274" Jul 9 23:47:37.058274 kubelet[3395]: I0709 23:47:37.058247 3395 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8380587f-2205-4833-898e-ec294776d38c-whisker-backend-key-pair\") pod \"8380587f-2205-4833-898e-ec294776d38c\" (UID: \"8380587f-2205-4833-898e-ec294776d38c\") " Jul 9 23:47:37.058274 kubelet[3395]: I0709 23:47:37.058279 3395 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9p99j\" (UniqueName: \"kubernetes.io/projected/8380587f-2205-4833-898e-ec294776d38c-kube-api-access-9p99j\") pod \"8380587f-2205-4833-898e-ec294776d38c\" (UID: \"8380587f-2205-4833-898e-ec294776d38c\") " Jul 9 23:47:37.058392 kubelet[3395]: I0709 23:47:37.058295 3395 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8380587f-2205-4833-898e-ec294776d38c-whisker-ca-bundle\") pod \"8380587f-2205-4833-898e-ec294776d38c\" (UID: \"8380587f-2205-4833-898e-ec294776d38c\") " Jul 9 23:47:37.058817 kubelet[3395]: I0709 23:47:37.058740 3395 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8380587f-2205-4833-898e-ec294776d38c-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "8380587f-2205-4833-898e-ec294776d38c" 
(UID: "8380587f-2205-4833-898e-ec294776d38c"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 9 23:47:37.062275 kubelet[3395]: I0709 23:47:37.062242 3395 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8380587f-2205-4833-898e-ec294776d38c-kube-api-access-9p99j" (OuterVolumeSpecName: "kube-api-access-9p99j") pod "8380587f-2205-4833-898e-ec294776d38c" (UID: "8380587f-2205-4833-898e-ec294776d38c"). InnerVolumeSpecName "kube-api-access-9p99j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 9 23:47:37.062536 kubelet[3395]: I0709 23:47:37.062508 3395 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8380587f-2205-4833-898e-ec294776d38c-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "8380587f-2205-4833-898e-ec294776d38c" (UID: "8380587f-2205-4833-898e-ec294776d38c"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 9 23:47:37.062548 systemd[1]: var-lib-kubelet-pods-8380587f\x2d2205\x2d4833\x2d898e\x2dec294776d38c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9p99j.mount: Deactivated successfully. Jul 9 23:47:37.062653 systemd[1]: var-lib-kubelet-pods-8380587f\x2d2205\x2d4833\x2d898e\x2dec294776d38c-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Jul 9 23:47:37.158932 kubelet[3395]: I0709 23:47:37.158835 3395 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8380587f-2205-4833-898e-ec294776d38c-whisker-backend-key-pair\") on node \"ci-4344.1.1-n-5de0cd73c3\" DevicePath \"\"" Jul 9 23:47:37.158932 kubelet[3395]: I0709 23:47:37.158865 3395 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9p99j\" (UniqueName: \"kubernetes.io/projected/8380587f-2205-4833-898e-ec294776d38c-kube-api-access-9p99j\") on node \"ci-4344.1.1-n-5de0cd73c3\" DevicePath \"\"" Jul 9 23:47:37.158932 kubelet[3395]: I0709 23:47:37.158879 3395 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8380587f-2205-4833-898e-ec294776d38c-whisker-ca-bundle\") on node \"ci-4344.1.1-n-5de0cd73c3\" DevicePath \"\"" Jul 9 23:47:37.889887 systemd[1]: Removed slice kubepods-besteffort-pod8380587f_2205_4833_898e_ec294776d38c.slice - libcontainer container kubepods-besteffort-pod8380587f_2205_4833_898e_ec294776d38c.slice. Jul 9 23:47:38.106323 systemd[1]: Created slice kubepods-besteffort-podbb0ca629_6c80_4482_b50f_4e3687d6e291.slice - libcontainer container kubepods-besteffort-podbb0ca629_6c80_4482_b50f_4e3687d6e291.slice. 
Jul 9 23:47:38.164111 kubelet[3395]: I0709 23:47:38.164000 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqgrk\" (UniqueName: \"kubernetes.io/projected/bb0ca629-6c80-4482-b50f-4e3687d6e291-kube-api-access-cqgrk\") pod \"whisker-8fcfc8b5f-jnz4b\" (UID: \"bb0ca629-6c80-4482-b50f-4e3687d6e291\") " pod="calico-system/whisker-8fcfc8b5f-jnz4b" Jul 9 23:47:38.164111 kubelet[3395]: I0709 23:47:38.164041 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/bb0ca629-6c80-4482-b50f-4e3687d6e291-whisker-backend-key-pair\") pod \"whisker-8fcfc8b5f-jnz4b\" (UID: \"bb0ca629-6c80-4482-b50f-4e3687d6e291\") " pod="calico-system/whisker-8fcfc8b5f-jnz4b" Jul 9 23:47:38.164661 kubelet[3395]: I0709 23:47:38.164348 3395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bb0ca629-6c80-4482-b50f-4e3687d6e291-whisker-ca-bundle\") pod \"whisker-8fcfc8b5f-jnz4b\" (UID: \"bb0ca629-6c80-4482-b50f-4e3687d6e291\") " pod="calico-system/whisker-8fcfc8b5f-jnz4b" Jul 9 23:47:38.226252 systemd-networkd[1490]: vxlan.calico: Link UP Jul 9 23:47:38.226260 systemd-networkd[1490]: vxlan.calico: Gained carrier Jul 9 23:47:38.409125 containerd[1923]: time="2025-07-09T23:47:38.408924857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8fcfc8b5f-jnz4b,Uid:bb0ca629-6c80-4482-b50f-4e3687d6e291,Namespace:calico-system,Attempt:0,}" Jul 9 23:47:38.683646 systemd-networkd[1490]: calif80c865b81a: Link UP Jul 9 23:47:38.684408 systemd-networkd[1490]: calif80c865b81a: Gained carrier Jul 9 23:47:38.698645 containerd[1923]: 2025-07-09 23:47:38.629 [INFO][4885] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4344.1.1--n--5de0cd73c3-k8s-whisker--8fcfc8b5f--jnz4b-eth0 whisker-8fcfc8b5f- calico-system bb0ca629-6c80-4482-b50f-4e3687d6e291 919 0 2025-07-09 23:47:38 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:8fcfc8b5f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4344.1.1-n-5de0cd73c3 whisker-8fcfc8b5f-jnz4b eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calif80c865b81a [] [] }} ContainerID="645ae9a5070611c1417b9682c0312f5b0ae6af25ddf5a3280d8751abef990110" Namespace="calico-system" Pod="whisker-8fcfc8b5f-jnz4b" WorkloadEndpoint="ci--4344.1.1--n--5de0cd73c3-k8s-whisker--8fcfc8b5f--jnz4b-" Jul 9 23:47:38.698645 containerd[1923]: 2025-07-09 23:47:38.629 [INFO][4885] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="645ae9a5070611c1417b9682c0312f5b0ae6af25ddf5a3280d8751abef990110" Namespace="calico-system" Pod="whisker-8fcfc8b5f-jnz4b" WorkloadEndpoint="ci--4344.1.1--n--5de0cd73c3-k8s-whisker--8fcfc8b5f--jnz4b-eth0" Jul 9 23:47:38.698645 containerd[1923]: 2025-07-09 23:47:38.647 [INFO][4898] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="645ae9a5070611c1417b9682c0312f5b0ae6af25ddf5a3280d8751abef990110" HandleID="k8s-pod-network.645ae9a5070611c1417b9682c0312f5b0ae6af25ddf5a3280d8751abef990110" Workload="ci--4344.1.1--n--5de0cd73c3-k8s-whisker--8fcfc8b5f--jnz4b-eth0" Jul 9 23:47:38.698812 containerd[1923]: 2025-07-09 23:47:38.647 [INFO][4898] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="645ae9a5070611c1417b9682c0312f5b0ae6af25ddf5a3280d8751abef990110" HandleID="k8s-pod-network.645ae9a5070611c1417b9682c0312f5b0ae6af25ddf5a3280d8751abef990110" Workload="ci--4344.1.1--n--5de0cd73c3-k8s-whisker--8fcfc8b5f--jnz4b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3690), Attrs:map[string]string{"namespace":"calico-system", 
"node":"ci-4344.1.1-n-5de0cd73c3", "pod":"whisker-8fcfc8b5f-jnz4b", "timestamp":"2025-07-09 23:47:38.647861682 +0000 UTC"}, Hostname:"ci-4344.1.1-n-5de0cd73c3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 9 23:47:38.698812 containerd[1923]: 2025-07-09 23:47:38.648 [INFO][4898] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 9 23:47:38.698812 containerd[1923]: 2025-07-09 23:47:38.648 [INFO][4898] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 9 23:47:38.698812 containerd[1923]: 2025-07-09 23:47:38.648 [INFO][4898] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.1-n-5de0cd73c3' Jul 9 23:47:38.698812 containerd[1923]: 2025-07-09 23:47:38.653 [INFO][4898] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.645ae9a5070611c1417b9682c0312f5b0ae6af25ddf5a3280d8751abef990110" host="ci-4344.1.1-n-5de0cd73c3" Jul 9 23:47:38.698812 containerd[1923]: 2025-07-09 23:47:38.656 [INFO][4898] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.1-n-5de0cd73c3" Jul 9 23:47:38.698812 containerd[1923]: 2025-07-09 23:47:38.660 [INFO][4898] ipam/ipam.go 511: Trying affinity for 192.168.27.192/26 host="ci-4344.1.1-n-5de0cd73c3" Jul 9 23:47:38.698812 containerd[1923]: 2025-07-09 23:47:38.662 [INFO][4898] ipam/ipam.go 158: Attempting to load block cidr=192.168.27.192/26 host="ci-4344.1.1-n-5de0cd73c3" Jul 9 23:47:38.698812 containerd[1923]: 2025-07-09 23:47:38.663 [INFO][4898] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.27.192/26 host="ci-4344.1.1-n-5de0cd73c3" Jul 9 23:47:38.698944 containerd[1923]: 2025-07-09 23:47:38.663 [INFO][4898] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.27.192/26 
handle="k8s-pod-network.645ae9a5070611c1417b9682c0312f5b0ae6af25ddf5a3280d8751abef990110" host="ci-4344.1.1-n-5de0cd73c3" Jul 9 23:47:38.698944 containerd[1923]: 2025-07-09 23:47:38.665 [INFO][4898] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.645ae9a5070611c1417b9682c0312f5b0ae6af25ddf5a3280d8751abef990110 Jul 9 23:47:38.698944 containerd[1923]: 2025-07-09 23:47:38.669 [INFO][4898] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.27.192/26 handle="k8s-pod-network.645ae9a5070611c1417b9682c0312f5b0ae6af25ddf5a3280d8751abef990110" host="ci-4344.1.1-n-5de0cd73c3" Jul 9 23:47:38.698944 containerd[1923]: 2025-07-09 23:47:38.678 [INFO][4898] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.27.193/26] block=192.168.27.192/26 handle="k8s-pod-network.645ae9a5070611c1417b9682c0312f5b0ae6af25ddf5a3280d8751abef990110" host="ci-4344.1.1-n-5de0cd73c3" Jul 9 23:47:38.698944 containerd[1923]: 2025-07-09 23:47:38.678 [INFO][4898] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.27.193/26] handle="k8s-pod-network.645ae9a5070611c1417b9682c0312f5b0ae6af25ddf5a3280d8751abef990110" host="ci-4344.1.1-n-5de0cd73c3" Jul 9 23:47:38.698944 containerd[1923]: 2025-07-09 23:47:38.678 [INFO][4898] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 9 23:47:38.698944 containerd[1923]: 2025-07-09 23:47:38.678 [INFO][4898] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.27.193/26] IPv6=[] ContainerID="645ae9a5070611c1417b9682c0312f5b0ae6af25ddf5a3280d8751abef990110" HandleID="k8s-pod-network.645ae9a5070611c1417b9682c0312f5b0ae6af25ddf5a3280d8751abef990110" Workload="ci--4344.1.1--n--5de0cd73c3-k8s-whisker--8fcfc8b5f--jnz4b-eth0" Jul 9 23:47:38.699040 containerd[1923]: 2025-07-09 23:47:38.680 [INFO][4885] cni-plugin/k8s.go 418: Populated endpoint ContainerID="645ae9a5070611c1417b9682c0312f5b0ae6af25ddf5a3280d8751abef990110" Namespace="calico-system" Pod="whisker-8fcfc8b5f-jnz4b" WorkloadEndpoint="ci--4344.1.1--n--5de0cd73c3-k8s-whisker--8fcfc8b5f--jnz4b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--n--5de0cd73c3-k8s-whisker--8fcfc8b5f--jnz4b-eth0", GenerateName:"whisker-8fcfc8b5f-", Namespace:"calico-system", SelfLink:"", UID:"bb0ca629-6c80-4482-b50f-4e3687d6e291", ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 23, 47, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"8fcfc8b5f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-n-5de0cd73c3", ContainerID:"", Pod:"whisker-8fcfc8b5f-jnz4b", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.27.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.whisker"}, InterfaceName:"calif80c865b81a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 23:47:38.699040 containerd[1923]: 2025-07-09 23:47:38.680 [INFO][4885] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.27.193/32] ContainerID="645ae9a5070611c1417b9682c0312f5b0ae6af25ddf5a3280d8751abef990110" Namespace="calico-system" Pod="whisker-8fcfc8b5f-jnz4b" WorkloadEndpoint="ci--4344.1.1--n--5de0cd73c3-k8s-whisker--8fcfc8b5f--jnz4b-eth0" Jul 9 23:47:38.699087 containerd[1923]: 2025-07-09 23:47:38.680 [INFO][4885] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif80c865b81a ContainerID="645ae9a5070611c1417b9682c0312f5b0ae6af25ddf5a3280d8751abef990110" Namespace="calico-system" Pod="whisker-8fcfc8b5f-jnz4b" WorkloadEndpoint="ci--4344.1.1--n--5de0cd73c3-k8s-whisker--8fcfc8b5f--jnz4b-eth0" Jul 9 23:47:38.699087 containerd[1923]: 2025-07-09 23:47:38.684 [INFO][4885] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="645ae9a5070611c1417b9682c0312f5b0ae6af25ddf5a3280d8751abef990110" Namespace="calico-system" Pod="whisker-8fcfc8b5f-jnz4b" WorkloadEndpoint="ci--4344.1.1--n--5de0cd73c3-k8s-whisker--8fcfc8b5f--jnz4b-eth0" Jul 9 23:47:38.699113 containerd[1923]: 2025-07-09 23:47:38.685 [INFO][4885] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="645ae9a5070611c1417b9682c0312f5b0ae6af25ddf5a3280d8751abef990110" Namespace="calico-system" Pod="whisker-8fcfc8b5f-jnz4b" WorkloadEndpoint="ci--4344.1.1--n--5de0cd73c3-k8s-whisker--8fcfc8b5f--jnz4b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--n--5de0cd73c3-k8s-whisker--8fcfc8b5f--jnz4b-eth0", GenerateName:"whisker-8fcfc8b5f-", Namespace:"calico-system", SelfLink:"", UID:"bb0ca629-6c80-4482-b50f-4e3687d6e291", 
ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 23, 47, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"8fcfc8b5f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-n-5de0cd73c3", ContainerID:"645ae9a5070611c1417b9682c0312f5b0ae6af25ddf5a3280d8751abef990110", Pod:"whisker-8fcfc8b5f-jnz4b", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.27.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calif80c865b81a", MAC:"62:b9:2a:7a:53:07", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 23:47:38.699145 containerd[1923]: 2025-07-09 23:47:38.696 [INFO][4885] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="645ae9a5070611c1417b9682c0312f5b0ae6af25ddf5a3280d8751abef990110" Namespace="calico-system" Pod="whisker-8fcfc8b5f-jnz4b" WorkloadEndpoint="ci--4344.1.1--n--5de0cd73c3-k8s-whisker--8fcfc8b5f--jnz4b-eth0" Jul 9 23:47:38.793919 containerd[1923]: time="2025-07-09T23:47:38.793883194Z" level=info msg="connecting to shim 645ae9a5070611c1417b9682c0312f5b0ae6af25ddf5a3280d8751abef990110" address="unix:///run/containerd/s/7b3e48daa5bd3e32bc3212c93032c3b1d19e2860966f49d00314b08a61fc4ff3" namespace=k8s.io protocol=ttrpc version=3 Jul 9 23:47:38.816773 systemd[1]: Started cri-containerd-645ae9a5070611c1417b9682c0312f5b0ae6af25ddf5a3280d8751abef990110.scope - libcontainer container 
645ae9a5070611c1417b9682c0312f5b0ae6af25ddf5a3280d8751abef990110. Jul 9 23:47:38.859435 containerd[1923]: time="2025-07-09T23:47:38.859399454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8fcfc8b5f-jnz4b,Uid:bb0ca629-6c80-4482-b50f-4e3687d6e291,Namespace:calico-system,Attempt:0,} returns sandbox id \"645ae9a5070611c1417b9682c0312f5b0ae6af25ddf5a3280d8751abef990110\"" Jul 9 23:47:38.867591 containerd[1923]: time="2025-07-09T23:47:38.867553021Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 9 23:47:39.468317 systemd-networkd[1490]: vxlan.calico: Gained IPv6LL Jul 9 23:47:39.884915 kubelet[3395]: I0709 23:47:39.884877 3395 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8380587f-2205-4833-898e-ec294776d38c" path="/var/lib/kubelet/pods/8380587f-2205-4833-898e-ec294776d38c/volumes" Jul 9 23:47:39.946466 kubelet[3395]: I0709 23:47:39.946409 3395 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 9 23:47:40.054703 containerd[1923]: time="2025-07-09T23:47:40.054647725Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b54f9d314e6f1c2bd3a3710160e0c3c20704906f83bd2d9ca1eeca664893d466\" id:\"b4e9c7567731d916d0c3f793f58a4aa736d8e60333010a203372648bcfa42f6e\" pid:4980 exited_at:{seconds:1752104860 nanos:54197262}" Jul 9 23:47:40.130364 containerd[1923]: time="2025-07-09T23:47:40.130295905Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b54f9d314e6f1c2bd3a3710160e0c3c20704906f83bd2d9ca1eeca664893d466\" id:\"3983256347fd6e4666676acc2f119ab0722459de454f99764a545e71ba3e4967\" pid:5004 exited_at:{seconds:1752104860 nanos:129951093}" Jul 9 23:47:40.346947 containerd[1923]: time="2025-07-09T23:47:40.346894971Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:47:40.351457 containerd[1923]: time="2025-07-09T23:47:40.351335875Z" level=info 
msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4605614" Jul 9 23:47:40.357786 containerd[1923]: time="2025-07-09T23:47:40.357754940Z" level=info msg="ImageCreate event name:\"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:47:40.365240 containerd[1923]: time="2025-07-09T23:47:40.365185253Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:47:40.365619 containerd[1923]: time="2025-07-09T23:47:40.365519328Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"5974847\" in 1.497923626s" Jul 9 23:47:40.365619 containerd[1923]: time="2025-07-09T23:47:40.365547657Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\"" Jul 9 23:47:40.368019 containerd[1923]: time="2025-07-09T23:47:40.367914758Z" level=info msg="CreateContainer within sandbox \"645ae9a5070611c1417b9682c0312f5b0ae6af25ddf5a3280d8751abef990110\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 9 23:47:40.409319 containerd[1923]: time="2025-07-09T23:47:40.408778134Z" level=info msg="Container 677b48de16d8a190fb41e787148b519bd84fb308667add8a810aba973de2bd25: CDI devices from CRI Config.CDIDevices: []" Jul 9 23:47:40.428354 systemd-networkd[1490]: calif80c865b81a: Gained IPv6LL Jul 9 23:47:40.431748 containerd[1923]: time="2025-07-09T23:47:40.431720000Z" level=info msg="CreateContainer within 
sandbox \"645ae9a5070611c1417b9682c0312f5b0ae6af25ddf5a3280d8751abef990110\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"677b48de16d8a190fb41e787148b519bd84fb308667add8a810aba973de2bd25\"" Jul 9 23:47:40.432677 containerd[1923]: time="2025-07-09T23:47:40.432657934Z" level=info msg="StartContainer for \"677b48de16d8a190fb41e787148b519bd84fb308667add8a810aba973de2bd25\"" Jul 9 23:47:40.433882 containerd[1923]: time="2025-07-09T23:47:40.433857581Z" level=info msg="connecting to shim 677b48de16d8a190fb41e787148b519bd84fb308667add8a810aba973de2bd25" address="unix:///run/containerd/s/7b3e48daa5bd3e32bc3212c93032c3b1d19e2860966f49d00314b08a61fc4ff3" protocol=ttrpc version=3 Jul 9 23:47:40.452310 systemd[1]: Started cri-containerd-677b48de16d8a190fb41e787148b519bd84fb308667add8a810aba973de2bd25.scope - libcontainer container 677b48de16d8a190fb41e787148b519bd84fb308667add8a810aba973de2bd25. Jul 9 23:47:40.489529 containerd[1923]: time="2025-07-09T23:47:40.489494726Z" level=info msg="StartContainer for \"677b48de16d8a190fb41e787148b519bd84fb308667add8a810aba973de2bd25\" returns successfully" Jul 9 23:47:40.491430 containerd[1923]: time="2025-07-09T23:47:40.491392107Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 9 23:47:42.540452 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2065694515.mount: Deactivated successfully. 
Jul 9 23:47:42.658112 containerd[1923]: time="2025-07-09T23:47:42.658063441Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:47:42.662494 containerd[1923]: time="2025-07-09T23:47:42.662459312Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=30814581" Jul 9 23:47:42.675391 containerd[1923]: time="2025-07-09T23:47:42.675339434Z" level=info msg="ImageCreate event name:\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:47:42.681499 containerd[1923]: time="2025-07-09T23:47:42.681455545Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:47:42.682095 containerd[1923]: time="2025-07-09T23:47:42.681958105Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"30814411\" in 2.190533405s" Jul 9 23:47:42.682095 containerd[1923]: time="2025-07-09T23:47:42.681987538Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\"" Jul 9 23:47:42.684340 containerd[1923]: time="2025-07-09T23:47:42.684244180Z" level=info msg="CreateContainer within sandbox \"645ae9a5070611c1417b9682c0312f5b0ae6af25ddf5a3280d8751abef990110\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 9 23:47:42.722722 
containerd[1923]: time="2025-07-09T23:47:42.722244943Z" level=info msg="Container f1152344f4ba50aab5d7a78321209bb0dd00083c166b6c175571c0ec6f76f103: CDI devices from CRI Config.CDIDevices: []" Jul 9 23:47:42.752873 containerd[1923]: time="2025-07-09T23:47:42.752841121Z" level=info msg="CreateContainer within sandbox \"645ae9a5070611c1417b9682c0312f5b0ae6af25ddf5a3280d8751abef990110\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"f1152344f4ba50aab5d7a78321209bb0dd00083c166b6c175571c0ec6f76f103\"" Jul 9 23:47:42.753448 containerd[1923]: time="2025-07-09T23:47:42.753243862Z" level=info msg="StartContainer for \"f1152344f4ba50aab5d7a78321209bb0dd00083c166b6c175571c0ec6f76f103\"" Jul 9 23:47:42.754244 containerd[1923]: time="2025-07-09T23:47:42.754126827Z" level=info msg="connecting to shim f1152344f4ba50aab5d7a78321209bb0dd00083c166b6c175571c0ec6f76f103" address="unix:///run/containerd/s/7b3e48daa5bd3e32bc3212c93032c3b1d19e2860966f49d00314b08a61fc4ff3" protocol=ttrpc version=3 Jul 9 23:47:42.781298 systemd[1]: Started cri-containerd-f1152344f4ba50aab5d7a78321209bb0dd00083c166b6c175571c0ec6f76f103.scope - libcontainer container f1152344f4ba50aab5d7a78321209bb0dd00083c166b6c175571c0ec6f76f103. 
Jul 9 23:47:42.821693 containerd[1923]: time="2025-07-09T23:47:42.821397213Z" level=info msg="StartContainer for \"f1152344f4ba50aab5d7a78321209bb0dd00083c166b6c175571c0ec6f76f103\" returns successfully" Jul 9 23:47:47.884419 containerd[1923]: time="2025-07-09T23:47:47.884105348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-qhgc2,Uid:41443e43-d633-4a6f-8353-8ac0a7cc324a,Namespace:kube-system,Attempt:0,}" Jul 9 23:47:47.886281 containerd[1923]: time="2025-07-09T23:47:47.884958631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-cpv7t,Uid:a1f01476-8864-45e5-a57f-ac547a07aedb,Namespace:calico-system,Attempt:0,}" Jul 9 23:47:47.886281 containerd[1923]: time="2025-07-09T23:47:47.885323755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dd99d6fcb-jl4z7,Uid:774cf081-2fd4-4dbe-815a-60b8b7aa36b2,Namespace:calico-apiserver,Attempt:0,}" Jul 9 23:47:48.095983 systemd-networkd[1490]: calid79f6d1b211: Link UP Jul 9 23:47:48.096768 systemd-networkd[1490]: calid79f6d1b211: Gained carrier Jul 9 23:47:48.125426 containerd[1923]: 2025-07-09 23:47:47.969 [INFO][5102] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.1--n--5de0cd73c3-k8s-coredns--7c65d6cfc9--qhgc2-eth0 coredns-7c65d6cfc9- kube-system 41443e43-d633-4a6f-8353-8ac0a7cc324a 809 0 2025-07-09 23:46:58 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4344.1.1-n-5de0cd73c3 coredns-7c65d6cfc9-qhgc2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid79f6d1b211 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="29aabc6291030aa2b48dd24540d0af645fe93a8138bede21902a13aa21919a47" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qhgc2" 
WorkloadEndpoint="ci--4344.1.1--n--5de0cd73c3-k8s-coredns--7c65d6cfc9--qhgc2-" Jul 9 23:47:48.125426 containerd[1923]: 2025-07-09 23:47:47.969 [INFO][5102] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="29aabc6291030aa2b48dd24540d0af645fe93a8138bede21902a13aa21919a47" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qhgc2" WorkloadEndpoint="ci--4344.1.1--n--5de0cd73c3-k8s-coredns--7c65d6cfc9--qhgc2-eth0" Jul 9 23:47:48.125426 containerd[1923]: 2025-07-09 23:47:48.019 [INFO][5136] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="29aabc6291030aa2b48dd24540d0af645fe93a8138bede21902a13aa21919a47" HandleID="k8s-pod-network.29aabc6291030aa2b48dd24540d0af645fe93a8138bede21902a13aa21919a47" Workload="ci--4344.1.1--n--5de0cd73c3-k8s-coredns--7c65d6cfc9--qhgc2-eth0" Jul 9 23:47:48.125592 containerd[1923]: 2025-07-09 23:47:48.019 [INFO][5136] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="29aabc6291030aa2b48dd24540d0af645fe93a8138bede21902a13aa21919a47" HandleID="k8s-pod-network.29aabc6291030aa2b48dd24540d0af645fe93a8138bede21902a13aa21919a47" Workload="ci--4344.1.1--n--5de0cd73c3-k8s-coredns--7c65d6cfc9--qhgc2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b7b0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4344.1.1-n-5de0cd73c3", "pod":"coredns-7c65d6cfc9-qhgc2", "timestamp":"2025-07-09 23:47:48.019856104 +0000 UTC"}, Hostname:"ci-4344.1.1-n-5de0cd73c3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 9 23:47:48.125592 containerd[1923]: 2025-07-09 23:47:48.020 [INFO][5136] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 9 23:47:48.125592 containerd[1923]: 2025-07-09 23:47:48.020 [INFO][5136] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 9 23:47:48.125592 containerd[1923]: 2025-07-09 23:47:48.020 [INFO][5136] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.1-n-5de0cd73c3' Jul 9 23:47:48.125592 containerd[1923]: 2025-07-09 23:47:48.035 [INFO][5136] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.29aabc6291030aa2b48dd24540d0af645fe93a8138bede21902a13aa21919a47" host="ci-4344.1.1-n-5de0cd73c3" Jul 9 23:47:48.125592 containerd[1923]: 2025-07-09 23:47:48.042 [INFO][5136] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.1-n-5de0cd73c3" Jul 9 23:47:48.125592 containerd[1923]: 2025-07-09 23:47:48.051 [INFO][5136] ipam/ipam.go 511: Trying affinity for 192.168.27.192/26 host="ci-4344.1.1-n-5de0cd73c3" Jul 9 23:47:48.125592 containerd[1923]: 2025-07-09 23:47:48.053 [INFO][5136] ipam/ipam.go 158: Attempting to load block cidr=192.168.27.192/26 host="ci-4344.1.1-n-5de0cd73c3" Jul 9 23:47:48.125592 containerd[1923]: 2025-07-09 23:47:48.056 [INFO][5136] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.27.192/26 host="ci-4344.1.1-n-5de0cd73c3" Jul 9 23:47:48.125725 containerd[1923]: 2025-07-09 23:47:48.056 [INFO][5136] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.27.192/26 handle="k8s-pod-network.29aabc6291030aa2b48dd24540d0af645fe93a8138bede21902a13aa21919a47" host="ci-4344.1.1-n-5de0cd73c3" Jul 9 23:47:48.125725 containerd[1923]: 2025-07-09 23:47:48.062 [INFO][5136] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.29aabc6291030aa2b48dd24540d0af645fe93a8138bede21902a13aa21919a47 Jul 9 23:47:48.125725 containerd[1923]: 2025-07-09 23:47:48.067 [INFO][5136] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.27.192/26 handle="k8s-pod-network.29aabc6291030aa2b48dd24540d0af645fe93a8138bede21902a13aa21919a47" host="ci-4344.1.1-n-5de0cd73c3" Jul 9 23:47:48.125725 containerd[1923]: 2025-07-09 23:47:48.088 [INFO][5136] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.27.194/26] block=192.168.27.192/26 handle="k8s-pod-network.29aabc6291030aa2b48dd24540d0af645fe93a8138bede21902a13aa21919a47" host="ci-4344.1.1-n-5de0cd73c3" Jul 9 23:47:48.125725 containerd[1923]: 2025-07-09 23:47:48.088 [INFO][5136] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.27.194/26] handle="k8s-pod-network.29aabc6291030aa2b48dd24540d0af645fe93a8138bede21902a13aa21919a47" host="ci-4344.1.1-n-5de0cd73c3" Jul 9 23:47:48.125725 containerd[1923]: 2025-07-09 23:47:48.088 [INFO][5136] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 9 23:47:48.125725 containerd[1923]: 2025-07-09 23:47:48.089 [INFO][5136] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.27.194/26] IPv6=[] ContainerID="29aabc6291030aa2b48dd24540d0af645fe93a8138bede21902a13aa21919a47" HandleID="k8s-pod-network.29aabc6291030aa2b48dd24540d0af645fe93a8138bede21902a13aa21919a47" Workload="ci--4344.1.1--n--5de0cd73c3-k8s-coredns--7c65d6cfc9--qhgc2-eth0" Jul 9 23:47:48.125851 containerd[1923]: 2025-07-09 23:47:48.092 [INFO][5102] cni-plugin/k8s.go 418: Populated endpoint ContainerID="29aabc6291030aa2b48dd24540d0af645fe93a8138bede21902a13aa21919a47" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qhgc2" WorkloadEndpoint="ci--4344.1.1--n--5de0cd73c3-k8s-coredns--7c65d6cfc9--qhgc2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--n--5de0cd73c3-k8s-coredns--7c65d6cfc9--qhgc2-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"41443e43-d633-4a6f-8353-8ac0a7cc324a", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 23, 46, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-n-5de0cd73c3", ContainerID:"", Pod:"coredns-7c65d6cfc9-qhgc2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.27.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid79f6d1b211", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 23:47:48.125851 containerd[1923]: 2025-07-09 23:47:48.092 [INFO][5102] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.27.194/32] ContainerID="29aabc6291030aa2b48dd24540d0af645fe93a8138bede21902a13aa21919a47" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qhgc2" WorkloadEndpoint="ci--4344.1.1--n--5de0cd73c3-k8s-coredns--7c65d6cfc9--qhgc2-eth0" Jul 9 23:47:48.125851 containerd[1923]: 2025-07-09 23:47:48.092 [INFO][5102] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid79f6d1b211 ContainerID="29aabc6291030aa2b48dd24540d0af645fe93a8138bede21902a13aa21919a47" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qhgc2" WorkloadEndpoint="ci--4344.1.1--n--5de0cd73c3-k8s-coredns--7c65d6cfc9--qhgc2-eth0" Jul 9 23:47:48.125851 containerd[1923]: 2025-07-09 23:47:48.097 [INFO][5102] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="29aabc6291030aa2b48dd24540d0af645fe93a8138bede21902a13aa21919a47" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qhgc2" WorkloadEndpoint="ci--4344.1.1--n--5de0cd73c3-k8s-coredns--7c65d6cfc9--qhgc2-eth0" Jul 9 23:47:48.125851 containerd[1923]: 2025-07-09 23:47:48.097 [INFO][5102] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="29aabc6291030aa2b48dd24540d0af645fe93a8138bede21902a13aa21919a47" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qhgc2" WorkloadEndpoint="ci--4344.1.1--n--5de0cd73c3-k8s-coredns--7c65d6cfc9--qhgc2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--n--5de0cd73c3-k8s-coredns--7c65d6cfc9--qhgc2-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"41443e43-d633-4a6f-8353-8ac0a7cc324a", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 23, 46, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-n-5de0cd73c3", ContainerID:"29aabc6291030aa2b48dd24540d0af645fe93a8138bede21902a13aa21919a47", Pod:"coredns-7c65d6cfc9-qhgc2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.27.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid79f6d1b211", MAC:"7e:cb:66:b6:fa:9d", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 23:47:48.125851 containerd[1923]: 2025-07-09 23:47:48.123 [INFO][5102] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="29aabc6291030aa2b48dd24540d0af645fe93a8138bede21902a13aa21919a47" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qhgc2" WorkloadEndpoint="ci--4344.1.1--n--5de0cd73c3-k8s-coredns--7c65d6cfc9--qhgc2-eth0" Jul 9 23:47:48.126870 kubelet[3395]: I0709 23:47:48.126066 3395 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-8fcfc8b5f-jnz4b" podStartSLOduration=6.303898605 podStartE2EDuration="10.126049099s" podCreationTimestamp="2025-07-09 23:47:38 +0000 UTC" firstStartedPulling="2025-07-09 23:47:38.860551052 +0000 UTC m=+47.037607966" lastFinishedPulling="2025-07-09 23:47:42.682701546 +0000 UTC m=+50.859758460" observedRunningTime="2025-07-09 23:47:43.043328555 +0000 UTC m=+51.220385485" watchObservedRunningTime="2025-07-09 23:47:48.126049099 +0000 UTC m=+56.303106013" Jul 9 23:47:48.213608 containerd[1923]: time="2025-07-09T23:47:48.212835872Z" level=info msg="connecting to shim 29aabc6291030aa2b48dd24540d0af645fe93a8138bede21902a13aa21919a47" address="unix:///run/containerd/s/1958bc03f1407648a235c10a059f2a5111a3aa07747763f842df413096dc29be" namespace=k8s.io protocol=ttrpc version=3 Jul 9 23:47:48.215608 systemd-networkd[1490]: cali32fdb401af6: Link UP Jul 9 23:47:48.218992 systemd-networkd[1490]: cali32fdb401af6: Gained carrier Jul 9 23:47:48.245308 
systemd[1]: Started cri-containerd-29aabc6291030aa2b48dd24540d0af645fe93a8138bede21902a13aa21919a47.scope - libcontainer container 29aabc6291030aa2b48dd24540d0af645fe93a8138bede21902a13aa21919a47. Jul 9 23:47:48.252841 containerd[1923]: 2025-07-09 23:47:47.992 [INFO][5112] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.1--n--5de0cd73c3-k8s-goldmane--58fd7646b9--cpv7t-eth0 goldmane-58fd7646b9- calico-system a1f01476-8864-45e5-a57f-ac547a07aedb 819 0 2025-07-09 23:47:10 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4344.1.1-n-5de0cd73c3 goldmane-58fd7646b9-cpv7t eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali32fdb401af6 [] [] }} ContainerID="c0b8dcf1eabd42214b04dc71663a27ed7c1898a793ace0abf8a78a74bccd90b4" Namespace="calico-system" Pod="goldmane-58fd7646b9-cpv7t" WorkloadEndpoint="ci--4344.1.1--n--5de0cd73c3-k8s-goldmane--58fd7646b9--cpv7t-" Jul 9 23:47:48.252841 containerd[1923]: 2025-07-09 23:47:47.993 [INFO][5112] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c0b8dcf1eabd42214b04dc71663a27ed7c1898a793ace0abf8a78a74bccd90b4" Namespace="calico-system" Pod="goldmane-58fd7646b9-cpv7t" WorkloadEndpoint="ci--4344.1.1--n--5de0cd73c3-k8s-goldmane--58fd7646b9--cpv7t-eth0" Jul 9 23:47:48.252841 containerd[1923]: 2025-07-09 23:47:48.047 [INFO][5145] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c0b8dcf1eabd42214b04dc71663a27ed7c1898a793ace0abf8a78a74bccd90b4" HandleID="k8s-pod-network.c0b8dcf1eabd42214b04dc71663a27ed7c1898a793ace0abf8a78a74bccd90b4" Workload="ci--4344.1.1--n--5de0cd73c3-k8s-goldmane--58fd7646b9--cpv7t-eth0" Jul 9 23:47:48.252841 containerd[1923]: 2025-07-09 23:47:48.047 [INFO][5145] ipam/ipam_plugin.go 265: Auto 
assigning IP ContainerID="c0b8dcf1eabd42214b04dc71663a27ed7c1898a793ace0abf8a78a74bccd90b4" HandleID="k8s-pod-network.c0b8dcf1eabd42214b04dc71663a27ed7c1898a793ace0abf8a78a74bccd90b4" Workload="ci--4344.1.1--n--5de0cd73c3-k8s-goldmane--58fd7646b9--cpv7t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3860), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344.1.1-n-5de0cd73c3", "pod":"goldmane-58fd7646b9-cpv7t", "timestamp":"2025-07-09 23:47:48.047525635 +0000 UTC"}, Hostname:"ci-4344.1.1-n-5de0cd73c3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 9 23:47:48.252841 containerd[1923]: 2025-07-09 23:47:48.047 [INFO][5145] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 9 23:47:48.252841 containerd[1923]: 2025-07-09 23:47:48.089 [INFO][5145] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 9 23:47:48.252841 containerd[1923]: 2025-07-09 23:47:48.089 [INFO][5145] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.1-n-5de0cd73c3' Jul 9 23:47:48.252841 containerd[1923]: 2025-07-09 23:47:48.144 [INFO][5145] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c0b8dcf1eabd42214b04dc71663a27ed7c1898a793ace0abf8a78a74bccd90b4" host="ci-4344.1.1-n-5de0cd73c3" Jul 9 23:47:48.252841 containerd[1923]: 2025-07-09 23:47:48.149 [INFO][5145] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.1-n-5de0cd73c3" Jul 9 23:47:48.252841 containerd[1923]: 2025-07-09 23:47:48.153 [INFO][5145] ipam/ipam.go 511: Trying affinity for 192.168.27.192/26 host="ci-4344.1.1-n-5de0cd73c3" Jul 9 23:47:48.252841 containerd[1923]: 2025-07-09 23:47:48.155 [INFO][5145] ipam/ipam.go 158: Attempting to load block cidr=192.168.27.192/26 host="ci-4344.1.1-n-5de0cd73c3" Jul 9 23:47:48.252841 containerd[1923]: 2025-07-09 23:47:48.164 [INFO][5145] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.27.192/26 host="ci-4344.1.1-n-5de0cd73c3" Jul 9 23:47:48.252841 containerd[1923]: 2025-07-09 23:47:48.164 [INFO][5145] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.27.192/26 handle="k8s-pod-network.c0b8dcf1eabd42214b04dc71663a27ed7c1898a793ace0abf8a78a74bccd90b4" host="ci-4344.1.1-n-5de0cd73c3" Jul 9 23:47:48.252841 containerd[1923]: 2025-07-09 23:47:48.166 [INFO][5145] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c0b8dcf1eabd42214b04dc71663a27ed7c1898a793ace0abf8a78a74bccd90b4 Jul 9 23:47:48.252841 containerd[1923]: 2025-07-09 23:47:48.171 [INFO][5145] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.27.192/26 handle="k8s-pod-network.c0b8dcf1eabd42214b04dc71663a27ed7c1898a793ace0abf8a78a74bccd90b4" host="ci-4344.1.1-n-5de0cd73c3" Jul 9 23:47:48.252841 containerd[1923]: 2025-07-09 23:47:48.198 [INFO][5145] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.27.195/26] block=192.168.27.192/26 handle="k8s-pod-network.c0b8dcf1eabd42214b04dc71663a27ed7c1898a793ace0abf8a78a74bccd90b4" host="ci-4344.1.1-n-5de0cd73c3" Jul 9 23:47:48.252841 containerd[1923]: 2025-07-09 23:47:48.198 [INFO][5145] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.27.195/26] handle="k8s-pod-network.c0b8dcf1eabd42214b04dc71663a27ed7c1898a793ace0abf8a78a74bccd90b4" host="ci-4344.1.1-n-5de0cd73c3" Jul 9 23:47:48.252841 containerd[1923]: 2025-07-09 23:47:48.198 [INFO][5145] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 9 23:47:48.252841 containerd[1923]: 2025-07-09 23:47:48.198 [INFO][5145] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.27.195/26] IPv6=[] ContainerID="c0b8dcf1eabd42214b04dc71663a27ed7c1898a793ace0abf8a78a74bccd90b4" HandleID="k8s-pod-network.c0b8dcf1eabd42214b04dc71663a27ed7c1898a793ace0abf8a78a74bccd90b4" Workload="ci--4344.1.1--n--5de0cd73c3-k8s-goldmane--58fd7646b9--cpv7t-eth0" Jul 9 23:47:48.253920 containerd[1923]: 2025-07-09 23:47:48.200 [INFO][5112] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c0b8dcf1eabd42214b04dc71663a27ed7c1898a793ace0abf8a78a74bccd90b4" Namespace="calico-system" Pod="goldmane-58fd7646b9-cpv7t" WorkloadEndpoint="ci--4344.1.1--n--5de0cd73c3-k8s-goldmane--58fd7646b9--cpv7t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--n--5de0cd73c3-k8s-goldmane--58fd7646b9--cpv7t-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"a1f01476-8864-45e5-a57f-ac547a07aedb", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 23, 47, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-n-5de0cd73c3", ContainerID:"", Pod:"goldmane-58fd7646b9-cpv7t", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.27.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali32fdb401af6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 23:47:48.253920 containerd[1923]: 2025-07-09 23:47:48.201 [INFO][5112] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.27.195/32] ContainerID="c0b8dcf1eabd42214b04dc71663a27ed7c1898a793ace0abf8a78a74bccd90b4" Namespace="calico-system" Pod="goldmane-58fd7646b9-cpv7t" WorkloadEndpoint="ci--4344.1.1--n--5de0cd73c3-k8s-goldmane--58fd7646b9--cpv7t-eth0" Jul 9 23:47:48.253920 containerd[1923]: 2025-07-09 23:47:48.201 [INFO][5112] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali32fdb401af6 ContainerID="c0b8dcf1eabd42214b04dc71663a27ed7c1898a793ace0abf8a78a74bccd90b4" Namespace="calico-system" Pod="goldmane-58fd7646b9-cpv7t" WorkloadEndpoint="ci--4344.1.1--n--5de0cd73c3-k8s-goldmane--58fd7646b9--cpv7t-eth0" Jul 9 23:47:48.253920 containerd[1923]: 2025-07-09 23:47:48.223 [INFO][5112] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c0b8dcf1eabd42214b04dc71663a27ed7c1898a793ace0abf8a78a74bccd90b4" Namespace="calico-system" Pod="goldmane-58fd7646b9-cpv7t" WorkloadEndpoint="ci--4344.1.1--n--5de0cd73c3-k8s-goldmane--58fd7646b9--cpv7t-eth0" Jul 9 23:47:48.253920 containerd[1923]: 2025-07-09 23:47:48.224 [INFO][5112] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c0b8dcf1eabd42214b04dc71663a27ed7c1898a793ace0abf8a78a74bccd90b4" Namespace="calico-system" Pod="goldmane-58fd7646b9-cpv7t" WorkloadEndpoint="ci--4344.1.1--n--5de0cd73c3-k8s-goldmane--58fd7646b9--cpv7t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--n--5de0cd73c3-k8s-goldmane--58fd7646b9--cpv7t-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"a1f01476-8864-45e5-a57f-ac547a07aedb", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 23, 47, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-n-5de0cd73c3", ContainerID:"c0b8dcf1eabd42214b04dc71663a27ed7c1898a793ace0abf8a78a74bccd90b4", Pod:"goldmane-58fd7646b9-cpv7t", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.27.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali32fdb401af6", MAC:"9e:14:90:a9:83:c3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 23:47:48.253920 containerd[1923]: 2025-07-09 23:47:48.249 [INFO][5112] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="c0b8dcf1eabd42214b04dc71663a27ed7c1898a793ace0abf8a78a74bccd90b4" Namespace="calico-system" Pod="goldmane-58fd7646b9-cpv7t" WorkloadEndpoint="ci--4344.1.1--n--5de0cd73c3-k8s-goldmane--58fd7646b9--cpv7t-eth0" Jul 9 23:47:48.311375 systemd-networkd[1490]: calie87fdee6374: Link UP Jul 9 23:47:48.311643 systemd-networkd[1490]: calie87fdee6374: Gained carrier Jul 9 23:47:48.319210 containerd[1923]: time="2025-07-09T23:47:48.319002235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-qhgc2,Uid:41443e43-d633-4a6f-8353-8ac0a7cc324a,Namespace:kube-system,Attempt:0,} returns sandbox id \"29aabc6291030aa2b48dd24540d0af645fe93a8138bede21902a13aa21919a47\"" Jul 9 23:47:48.328283 containerd[1923]: time="2025-07-09T23:47:48.327261283Z" level=info msg="CreateContainer within sandbox \"29aabc6291030aa2b48dd24540d0af645fe93a8138bede21902a13aa21919a47\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 9 23:47:48.334139 containerd[1923]: time="2025-07-09T23:47:48.334109884Z" level=info msg="connecting to shim c0b8dcf1eabd42214b04dc71663a27ed7c1898a793ace0abf8a78a74bccd90b4" address="unix:///run/containerd/s/714e6762cf61c86ebfa3b47d78c75b904bd4fd0861bfabaa8c32f1c34036139d" namespace=k8s.io protocol=ttrpc version=3 Jul 9 23:47:48.334239 containerd[1923]: 2025-07-09 23:47:48.013 [INFO][5123] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.1--n--5de0cd73c3-k8s-calico--apiserver--dd99d6fcb--jl4z7-eth0 calico-apiserver-dd99d6fcb- calico-apiserver 774cf081-2fd4-4dbe-815a-60b8b7aa36b2 822 0 2025-07-09 23:47:07 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:dd99d6fcb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4344.1.1-n-5de0cd73c3 calico-apiserver-dd99d6fcb-jl4z7 eth0 calico-apiserver [] [] 
[kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie87fdee6374 [] [] }} ContainerID="51b9922ad3a02da0861d3e1f67ae4bc9d86ad99fe628938cf0074bfe471eb3aa" Namespace="calico-apiserver" Pod="calico-apiserver-dd99d6fcb-jl4z7" WorkloadEndpoint="ci--4344.1.1--n--5de0cd73c3-k8s-calico--apiserver--dd99d6fcb--jl4z7-" Jul 9 23:47:48.334239 containerd[1923]: 2025-07-09 23:47:48.013 [INFO][5123] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="51b9922ad3a02da0861d3e1f67ae4bc9d86ad99fe628938cf0074bfe471eb3aa" Namespace="calico-apiserver" Pod="calico-apiserver-dd99d6fcb-jl4z7" WorkloadEndpoint="ci--4344.1.1--n--5de0cd73c3-k8s-calico--apiserver--dd99d6fcb--jl4z7-eth0" Jul 9 23:47:48.334239 containerd[1923]: 2025-07-09 23:47:48.061 [INFO][5152] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="51b9922ad3a02da0861d3e1f67ae4bc9d86ad99fe628938cf0074bfe471eb3aa" HandleID="k8s-pod-network.51b9922ad3a02da0861d3e1f67ae4bc9d86ad99fe628938cf0074bfe471eb3aa" Workload="ci--4344.1.1--n--5de0cd73c3-k8s-calico--apiserver--dd99d6fcb--jl4z7-eth0" Jul 9 23:47:48.334239 containerd[1923]: 2025-07-09 23:47:48.062 [INFO][5152] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="51b9922ad3a02da0861d3e1f67ae4bc9d86ad99fe628938cf0074bfe471eb3aa" HandleID="k8s-pod-network.51b9922ad3a02da0861d3e1f67ae4bc9d86ad99fe628938cf0074bfe471eb3aa" Workload="ci--4344.1.1--n--5de0cd73c3-k8s-calico--apiserver--dd99d6fcb--jl4z7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b730), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4344.1.1-n-5de0cd73c3", "pod":"calico-apiserver-dd99d6fcb-jl4z7", "timestamp":"2025-07-09 23:47:48.061963184 +0000 UTC"}, Hostname:"ci-4344.1.1-n-5de0cd73c3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 9 23:47:48.334239 
containerd[1923]: 2025-07-09 23:47:48.062 [INFO][5152] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 9 23:47:48.334239 containerd[1923]: 2025-07-09 23:47:48.198 [INFO][5152] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 9 23:47:48.334239 containerd[1923]: 2025-07-09 23:47:48.199 [INFO][5152] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.1-n-5de0cd73c3' Jul 9 23:47:48.334239 containerd[1923]: 2025-07-09 23:47:48.249 [INFO][5152] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.51b9922ad3a02da0861d3e1f67ae4bc9d86ad99fe628938cf0074bfe471eb3aa" host="ci-4344.1.1-n-5de0cd73c3" Jul 9 23:47:48.334239 containerd[1923]: 2025-07-09 23:47:48.258 [INFO][5152] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.1-n-5de0cd73c3" Jul 9 23:47:48.334239 containerd[1923]: 2025-07-09 23:47:48.263 [INFO][5152] ipam/ipam.go 511: Trying affinity for 192.168.27.192/26 host="ci-4344.1.1-n-5de0cd73c3" Jul 9 23:47:48.334239 containerd[1923]: 2025-07-09 23:47:48.265 [INFO][5152] ipam/ipam.go 158: Attempting to load block cidr=192.168.27.192/26 host="ci-4344.1.1-n-5de0cd73c3" Jul 9 23:47:48.334239 containerd[1923]: 2025-07-09 23:47:48.269 [INFO][5152] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.27.192/26 host="ci-4344.1.1-n-5de0cd73c3" Jul 9 23:47:48.334239 containerd[1923]: 2025-07-09 23:47:48.269 [INFO][5152] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.27.192/26 handle="k8s-pod-network.51b9922ad3a02da0861d3e1f67ae4bc9d86ad99fe628938cf0074bfe471eb3aa" host="ci-4344.1.1-n-5de0cd73c3" Jul 9 23:47:48.334239 containerd[1923]: 2025-07-09 23:47:48.272 [INFO][5152] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.51b9922ad3a02da0861d3e1f67ae4bc9d86ad99fe628938cf0074bfe471eb3aa Jul 9 23:47:48.334239 containerd[1923]: 2025-07-09 23:47:48.283 [INFO][5152] ipam/ipam.go 1243: Writing block in order to claim IPs 
block=192.168.27.192/26 handle="k8s-pod-network.51b9922ad3a02da0861d3e1f67ae4bc9d86ad99fe628938cf0074bfe471eb3aa" host="ci-4344.1.1-n-5de0cd73c3" Jul 9 23:47:48.334239 containerd[1923]: 2025-07-09 23:47:48.301 [INFO][5152] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.27.196/26] block=192.168.27.192/26 handle="k8s-pod-network.51b9922ad3a02da0861d3e1f67ae4bc9d86ad99fe628938cf0074bfe471eb3aa" host="ci-4344.1.1-n-5de0cd73c3" Jul 9 23:47:48.334239 containerd[1923]: 2025-07-09 23:47:48.301 [INFO][5152] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.27.196/26] handle="k8s-pod-network.51b9922ad3a02da0861d3e1f67ae4bc9d86ad99fe628938cf0074bfe471eb3aa" host="ci-4344.1.1-n-5de0cd73c3" Jul 9 23:47:48.334239 containerd[1923]: 2025-07-09 23:47:48.301 [INFO][5152] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 9 23:47:48.334239 containerd[1923]: 2025-07-09 23:47:48.301 [INFO][5152] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.27.196/26] IPv6=[] ContainerID="51b9922ad3a02da0861d3e1f67ae4bc9d86ad99fe628938cf0074bfe471eb3aa" HandleID="k8s-pod-network.51b9922ad3a02da0861d3e1f67ae4bc9d86ad99fe628938cf0074bfe471eb3aa" Workload="ci--4344.1.1--n--5de0cd73c3-k8s-calico--apiserver--dd99d6fcb--jl4z7-eth0" Jul 9 23:47:48.334596 containerd[1923]: 2025-07-09 23:47:48.305 [INFO][5123] cni-plugin/k8s.go 418: Populated endpoint ContainerID="51b9922ad3a02da0861d3e1f67ae4bc9d86ad99fe628938cf0074bfe471eb3aa" Namespace="calico-apiserver" Pod="calico-apiserver-dd99d6fcb-jl4z7" WorkloadEndpoint="ci--4344.1.1--n--5de0cd73c3-k8s-calico--apiserver--dd99d6fcb--jl4z7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--n--5de0cd73c3-k8s-calico--apiserver--dd99d6fcb--jl4z7-eth0", GenerateName:"calico-apiserver-dd99d6fcb-", Namespace:"calico-apiserver", SelfLink:"", UID:"774cf081-2fd4-4dbe-815a-60b8b7aa36b2", ResourceVersion:"822", 
Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 23, 47, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"dd99d6fcb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-n-5de0cd73c3", ContainerID:"", Pod:"calico-apiserver-dd99d6fcb-jl4z7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.27.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie87fdee6374", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 23:47:48.334596 containerd[1923]: 2025-07-09 23:47:48.305 [INFO][5123] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.27.196/32] ContainerID="51b9922ad3a02da0861d3e1f67ae4bc9d86ad99fe628938cf0074bfe471eb3aa" Namespace="calico-apiserver" Pod="calico-apiserver-dd99d6fcb-jl4z7" WorkloadEndpoint="ci--4344.1.1--n--5de0cd73c3-k8s-calico--apiserver--dd99d6fcb--jl4z7-eth0" Jul 9 23:47:48.334596 containerd[1923]: 2025-07-09 23:47:48.305 [INFO][5123] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie87fdee6374 ContainerID="51b9922ad3a02da0861d3e1f67ae4bc9d86ad99fe628938cf0074bfe471eb3aa" Namespace="calico-apiserver" Pod="calico-apiserver-dd99d6fcb-jl4z7" WorkloadEndpoint="ci--4344.1.1--n--5de0cd73c3-k8s-calico--apiserver--dd99d6fcb--jl4z7-eth0" Jul 9 23:47:48.334596 containerd[1923]: 2025-07-09 23:47:48.312 
[INFO][5123] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="51b9922ad3a02da0861d3e1f67ae4bc9d86ad99fe628938cf0074bfe471eb3aa" Namespace="calico-apiserver" Pod="calico-apiserver-dd99d6fcb-jl4z7" WorkloadEndpoint="ci--4344.1.1--n--5de0cd73c3-k8s-calico--apiserver--dd99d6fcb--jl4z7-eth0" Jul 9 23:47:48.334596 containerd[1923]: 2025-07-09 23:47:48.313 [INFO][5123] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="51b9922ad3a02da0861d3e1f67ae4bc9d86ad99fe628938cf0074bfe471eb3aa" Namespace="calico-apiserver" Pod="calico-apiserver-dd99d6fcb-jl4z7" WorkloadEndpoint="ci--4344.1.1--n--5de0cd73c3-k8s-calico--apiserver--dd99d6fcb--jl4z7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--n--5de0cd73c3-k8s-calico--apiserver--dd99d6fcb--jl4z7-eth0", GenerateName:"calico-apiserver-dd99d6fcb-", Namespace:"calico-apiserver", SelfLink:"", UID:"774cf081-2fd4-4dbe-815a-60b8b7aa36b2", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 23, 47, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"dd99d6fcb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-n-5de0cd73c3", ContainerID:"51b9922ad3a02da0861d3e1f67ae4bc9d86ad99fe628938cf0074bfe471eb3aa", Pod:"calico-apiserver-dd99d6fcb-jl4z7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.27.196/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie87fdee6374", MAC:"3a:d3:0c:37:70:8e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 23:47:48.334596 containerd[1923]: 2025-07-09 23:47:48.329 [INFO][5123] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="51b9922ad3a02da0861d3e1f67ae4bc9d86ad99fe628938cf0074bfe471eb3aa" Namespace="calico-apiserver" Pod="calico-apiserver-dd99d6fcb-jl4z7" WorkloadEndpoint="ci--4344.1.1--n--5de0cd73c3-k8s-calico--apiserver--dd99d6fcb--jl4z7-eth0" Jul 9 23:47:48.355414 systemd[1]: Started cri-containerd-c0b8dcf1eabd42214b04dc71663a27ed7c1898a793ace0abf8a78a74bccd90b4.scope - libcontainer container c0b8dcf1eabd42214b04dc71663a27ed7c1898a793ace0abf8a78a74bccd90b4. Jul 9 23:47:48.372223 containerd[1923]: time="2025-07-09T23:47:48.372167408Z" level=info msg="Container 5aad24331899853ad655db28756f2e720983f868b6c05bf6a0ea934963966ecf: CDI devices from CRI Config.CDIDevices: []" Jul 9 23:47:48.430360 containerd[1923]: time="2025-07-09T23:47:48.430323818Z" level=info msg="CreateContainer within sandbox \"29aabc6291030aa2b48dd24540d0af645fe93a8138bede21902a13aa21919a47\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5aad24331899853ad655db28756f2e720983f868b6c05bf6a0ea934963966ecf\"" Jul 9 23:47:48.431247 containerd[1923]: time="2025-07-09T23:47:48.431109700Z" level=info msg="StartContainer for \"5aad24331899853ad655db28756f2e720983f868b6c05bf6a0ea934963966ecf\"" Jul 9 23:47:48.437241 containerd[1923]: time="2025-07-09T23:47:48.437216340Z" level=info msg="connecting to shim 5aad24331899853ad655db28756f2e720983f868b6c05bf6a0ea934963966ecf" address="unix:///run/containerd/s/1958bc03f1407648a235c10a059f2a5111a3aa07747763f842df413096dc29be" protocol=ttrpc version=3 Jul 9 23:47:48.451038 containerd[1923]: 
time="2025-07-09T23:47:48.450994730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-cpv7t,Uid:a1f01476-8864-45e5-a57f-ac547a07aedb,Namespace:calico-system,Attempt:0,} returns sandbox id \"c0b8dcf1eabd42214b04dc71663a27ed7c1898a793ace0abf8a78a74bccd90b4\"" Jul 9 23:47:48.451348 systemd[1]: Started cri-containerd-5aad24331899853ad655db28756f2e720983f868b6c05bf6a0ea934963966ecf.scope - libcontainer container 5aad24331899853ad655db28756f2e720983f868b6c05bf6a0ea934963966ecf. Jul 9 23:47:48.453240 containerd[1923]: time="2025-07-09T23:47:48.453218811Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 9 23:47:48.495448 containerd[1923]: time="2025-07-09T23:47:48.495418583Z" level=info msg="connecting to shim 51b9922ad3a02da0861d3e1f67ae4bc9d86ad99fe628938cf0074bfe471eb3aa" address="unix:///run/containerd/s/61a60675a30ea7f119cf002ba086ec393122bc011402ac5ca08c7c8f56d4eb3e" namespace=k8s.io protocol=ttrpc version=3 Jul 9 23:47:48.514419 containerd[1923]: time="2025-07-09T23:47:48.514296668Z" level=info msg="StartContainer for \"5aad24331899853ad655db28756f2e720983f868b6c05bf6a0ea934963966ecf\" returns successfully" Jul 9 23:47:48.525404 systemd[1]: Started cri-containerd-51b9922ad3a02da0861d3e1f67ae4bc9d86ad99fe628938cf0074bfe471eb3aa.scope - libcontainer container 51b9922ad3a02da0861d3e1f67ae4bc9d86ad99fe628938cf0074bfe471eb3aa. 
Jul 9 23:47:48.564731 containerd[1923]: time="2025-07-09T23:47:48.564701367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dd99d6fcb-jl4z7,Uid:774cf081-2fd4-4dbe-815a-60b8b7aa36b2,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"51b9922ad3a02da0861d3e1f67ae4bc9d86ad99fe628938cf0074bfe471eb3aa\"" Jul 9 23:47:48.884095 containerd[1923]: time="2025-07-09T23:47:48.883753727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6ff74f4fb8-w9w6s,Uid:5351ad3a-79dc-4ade-b519-9b8bd32331ed,Namespace:calico-system,Attempt:0,}" Jul 9 23:47:48.884095 containerd[1923]: time="2025-07-09T23:47:48.883963294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vn5rz,Uid:1652e582-9df7-47e7-ad42-d56cc98d0a95,Namespace:calico-system,Attempt:0,}" Jul 9 23:47:48.998252 systemd-networkd[1490]: cali60f3b5d0745: Link UP Jul 9 23:47:48.998829 systemd-networkd[1490]: cali60f3b5d0745: Gained carrier Jul 9 23:47:49.015727 containerd[1923]: 2025-07-09 23:47:48.930 [INFO][5365] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.1--n--5de0cd73c3-k8s-calico--kube--controllers--6ff74f4fb8--w9w6s-eth0 calico-kube-controllers-6ff74f4fb8- calico-system 5351ad3a-79dc-4ade-b519-9b8bd32331ed 811 0 2025-07-09 23:47:11 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6ff74f4fb8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4344.1.1-n-5de0cd73c3 calico-kube-controllers-6ff74f4fb8-w9w6s eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali60f3b5d0745 [] [] }} ContainerID="4e0b8fecfad4589d9ac7a2469368670de1b87e359b3f1f7cec8c4fbc16c3c61e" Namespace="calico-system" Pod="calico-kube-controllers-6ff74f4fb8-w9w6s" 
WorkloadEndpoint="ci--4344.1.1--n--5de0cd73c3-k8s-calico--kube--controllers--6ff74f4fb8--w9w6s-" Jul 9 23:47:49.015727 containerd[1923]: 2025-07-09 23:47:48.930 [INFO][5365] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4e0b8fecfad4589d9ac7a2469368670de1b87e359b3f1f7cec8c4fbc16c3c61e" Namespace="calico-system" Pod="calico-kube-controllers-6ff74f4fb8-w9w6s" WorkloadEndpoint="ci--4344.1.1--n--5de0cd73c3-k8s-calico--kube--controllers--6ff74f4fb8--w9w6s-eth0" Jul 9 23:47:49.015727 containerd[1923]: 2025-07-09 23:47:48.952 [INFO][5387] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4e0b8fecfad4589d9ac7a2469368670de1b87e359b3f1f7cec8c4fbc16c3c61e" HandleID="k8s-pod-network.4e0b8fecfad4589d9ac7a2469368670de1b87e359b3f1f7cec8c4fbc16c3c61e" Workload="ci--4344.1.1--n--5de0cd73c3-k8s-calico--kube--controllers--6ff74f4fb8--w9w6s-eth0" Jul 9 23:47:49.015727 containerd[1923]: 2025-07-09 23:47:48.952 [INFO][5387] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4e0b8fecfad4589d9ac7a2469368670de1b87e359b3f1f7cec8c4fbc16c3c61e" HandleID="k8s-pod-network.4e0b8fecfad4589d9ac7a2469368670de1b87e359b3f1f7cec8c4fbc16c3c61e" Workload="ci--4344.1.1--n--5de0cd73c3-k8s-calico--kube--controllers--6ff74f4fb8--w9w6s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002abf40), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344.1.1-n-5de0cd73c3", "pod":"calico-kube-controllers-6ff74f4fb8-w9w6s", "timestamp":"2025-07-09 23:47:48.952329648 +0000 UTC"}, Hostname:"ci-4344.1.1-n-5de0cd73c3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 9 23:47:49.015727 containerd[1923]: 2025-07-09 23:47:48.952 [INFO][5387] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 9 23:47:49.015727 containerd[1923]: 2025-07-09 23:47:48.952 [INFO][5387] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 9 23:47:49.015727 containerd[1923]: 2025-07-09 23:47:48.952 [INFO][5387] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.1-n-5de0cd73c3' Jul 9 23:47:49.015727 containerd[1923]: 2025-07-09 23:47:48.958 [INFO][5387] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4e0b8fecfad4589d9ac7a2469368670de1b87e359b3f1f7cec8c4fbc16c3c61e" host="ci-4344.1.1-n-5de0cd73c3" Jul 9 23:47:49.015727 containerd[1923]: 2025-07-09 23:47:48.962 [INFO][5387] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.1-n-5de0cd73c3" Jul 9 23:47:49.015727 containerd[1923]: 2025-07-09 23:47:48.967 [INFO][5387] ipam/ipam.go 511: Trying affinity for 192.168.27.192/26 host="ci-4344.1.1-n-5de0cd73c3" Jul 9 23:47:49.015727 containerd[1923]: 2025-07-09 23:47:48.970 [INFO][5387] ipam/ipam.go 158: Attempting to load block cidr=192.168.27.192/26 host="ci-4344.1.1-n-5de0cd73c3" Jul 9 23:47:49.015727 containerd[1923]: 2025-07-09 23:47:48.973 [INFO][5387] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.27.192/26 host="ci-4344.1.1-n-5de0cd73c3" Jul 9 23:47:49.015727 containerd[1923]: 2025-07-09 23:47:48.973 [INFO][5387] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.27.192/26 handle="k8s-pod-network.4e0b8fecfad4589d9ac7a2469368670de1b87e359b3f1f7cec8c4fbc16c3c61e" host="ci-4344.1.1-n-5de0cd73c3" Jul 9 23:47:49.015727 containerd[1923]: 2025-07-09 23:47:48.974 [INFO][5387] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4e0b8fecfad4589d9ac7a2469368670de1b87e359b3f1f7cec8c4fbc16c3c61e Jul 9 23:47:49.015727 containerd[1923]: 2025-07-09 23:47:48.983 [INFO][5387] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.27.192/26 handle="k8s-pod-network.4e0b8fecfad4589d9ac7a2469368670de1b87e359b3f1f7cec8c4fbc16c3c61e" 
host="ci-4344.1.1-n-5de0cd73c3" Jul 9 23:47:49.015727 containerd[1923]: 2025-07-09 23:47:48.990 [INFO][5387] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.27.197/26] block=192.168.27.192/26 handle="k8s-pod-network.4e0b8fecfad4589d9ac7a2469368670de1b87e359b3f1f7cec8c4fbc16c3c61e" host="ci-4344.1.1-n-5de0cd73c3" Jul 9 23:47:49.015727 containerd[1923]: 2025-07-09 23:47:48.990 [INFO][5387] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.27.197/26] handle="k8s-pod-network.4e0b8fecfad4589d9ac7a2469368670de1b87e359b3f1f7cec8c4fbc16c3c61e" host="ci-4344.1.1-n-5de0cd73c3" Jul 9 23:47:49.015727 containerd[1923]: 2025-07-09 23:47:48.990 [INFO][5387] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 9 23:47:49.015727 containerd[1923]: 2025-07-09 23:47:48.990 [INFO][5387] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.27.197/26] IPv6=[] ContainerID="4e0b8fecfad4589d9ac7a2469368670de1b87e359b3f1f7cec8c4fbc16c3c61e" HandleID="k8s-pod-network.4e0b8fecfad4589d9ac7a2469368670de1b87e359b3f1f7cec8c4fbc16c3c61e" Workload="ci--4344.1.1--n--5de0cd73c3-k8s-calico--kube--controllers--6ff74f4fb8--w9w6s-eth0" Jul 9 23:47:49.017076 containerd[1923]: 2025-07-09 23:47:48.994 [INFO][5365] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4e0b8fecfad4589d9ac7a2469368670de1b87e359b3f1f7cec8c4fbc16c3c61e" Namespace="calico-system" Pod="calico-kube-controllers-6ff74f4fb8-w9w6s" WorkloadEndpoint="ci--4344.1.1--n--5de0cd73c3-k8s-calico--kube--controllers--6ff74f4fb8--w9w6s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--n--5de0cd73c3-k8s-calico--kube--controllers--6ff74f4fb8--w9w6s-eth0", GenerateName:"calico-kube-controllers-6ff74f4fb8-", Namespace:"calico-system", SelfLink:"", UID:"5351ad3a-79dc-4ade-b519-9b8bd32331ed", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 23, 47, 11, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6ff74f4fb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-n-5de0cd73c3", ContainerID:"", Pod:"calico-kube-controllers-6ff74f4fb8-w9w6s", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.27.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali60f3b5d0745", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 23:47:49.017076 containerd[1923]: 2025-07-09 23:47:48.994 [INFO][5365] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.27.197/32] ContainerID="4e0b8fecfad4589d9ac7a2469368670de1b87e359b3f1f7cec8c4fbc16c3c61e" Namespace="calico-system" Pod="calico-kube-controllers-6ff74f4fb8-w9w6s" WorkloadEndpoint="ci--4344.1.1--n--5de0cd73c3-k8s-calico--kube--controllers--6ff74f4fb8--w9w6s-eth0" Jul 9 23:47:49.017076 containerd[1923]: 2025-07-09 23:47:48.994 [INFO][5365] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60f3b5d0745 ContainerID="4e0b8fecfad4589d9ac7a2469368670de1b87e359b3f1f7cec8c4fbc16c3c61e" Namespace="calico-system" Pod="calico-kube-controllers-6ff74f4fb8-w9w6s" WorkloadEndpoint="ci--4344.1.1--n--5de0cd73c3-k8s-calico--kube--controllers--6ff74f4fb8--w9w6s-eth0" Jul 9 23:47:49.017076 containerd[1923]: 2025-07-09 23:47:48.999 [INFO][5365] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4e0b8fecfad4589d9ac7a2469368670de1b87e359b3f1f7cec8c4fbc16c3c61e" Namespace="calico-system" Pod="calico-kube-controllers-6ff74f4fb8-w9w6s" WorkloadEndpoint="ci--4344.1.1--n--5de0cd73c3-k8s-calico--kube--controllers--6ff74f4fb8--w9w6s-eth0" Jul 9 23:47:49.017076 containerd[1923]: 2025-07-09 23:47:48.999 [INFO][5365] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4e0b8fecfad4589d9ac7a2469368670de1b87e359b3f1f7cec8c4fbc16c3c61e" Namespace="calico-system" Pod="calico-kube-controllers-6ff74f4fb8-w9w6s" WorkloadEndpoint="ci--4344.1.1--n--5de0cd73c3-k8s-calico--kube--controllers--6ff74f4fb8--w9w6s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--n--5de0cd73c3-k8s-calico--kube--controllers--6ff74f4fb8--w9w6s-eth0", GenerateName:"calico-kube-controllers-6ff74f4fb8-", Namespace:"calico-system", SelfLink:"", UID:"5351ad3a-79dc-4ade-b519-9b8bd32331ed", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 23, 47, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6ff74f4fb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-n-5de0cd73c3", ContainerID:"4e0b8fecfad4589d9ac7a2469368670de1b87e359b3f1f7cec8c4fbc16c3c61e", Pod:"calico-kube-controllers-6ff74f4fb8-w9w6s", Endpoint:"eth0", 
ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.27.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali60f3b5d0745", MAC:"3e:bb:00:46:6d:35", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 23:47:49.017076 containerd[1923]: 2025-07-09 23:47:49.014 [INFO][5365] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4e0b8fecfad4589d9ac7a2469368670de1b87e359b3f1f7cec8c4fbc16c3c61e" Namespace="calico-system" Pod="calico-kube-controllers-6ff74f4fb8-w9w6s" WorkloadEndpoint="ci--4344.1.1--n--5de0cd73c3-k8s-calico--kube--controllers--6ff74f4fb8--w9w6s-eth0" Jul 9 23:47:49.057720 kubelet[3395]: I0709 23:47:49.057429 3395 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-qhgc2" podStartSLOduration=51.057413561 podStartE2EDuration="51.057413561s" podCreationTimestamp="2025-07-09 23:46:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 23:47:49.056471106 +0000 UTC m=+57.233528052" watchObservedRunningTime="2025-07-09 23:47:49.057413561 +0000 UTC m=+57.234470475" Jul 9 23:47:49.106533 containerd[1923]: time="2025-07-09T23:47:49.106306273Z" level=info msg="connecting to shim 4e0b8fecfad4589d9ac7a2469368670de1b87e359b3f1f7cec8c4fbc16c3c61e" address="unix:///run/containerd/s/36762f1f1458ba52f066c401e03ce1b48a35e97ec3d75c4f3a1b770faa01af5e" namespace=k8s.io protocol=ttrpc version=3 Jul 9 23:47:49.130389 systemd[1]: Started cri-containerd-4e0b8fecfad4589d9ac7a2469368670de1b87e359b3f1f7cec8c4fbc16c3c61e.scope - libcontainer container 4e0b8fecfad4589d9ac7a2469368670de1b87e359b3f1f7cec8c4fbc16c3c61e. 
Jul 9 23:47:49.141410 systemd-networkd[1490]: cali02677d921c9: Link UP Jul 9 23:47:49.144120 systemd-networkd[1490]: cali02677d921c9: Gained carrier Jul 9 23:47:49.169433 containerd[1923]: 2025-07-09 23:47:48.946 [INFO][5379] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.1--n--5de0cd73c3-k8s-csi--node--driver--vn5rz-eth0 csi-node-driver- calico-system 1652e582-9df7-47e7-ad42-d56cc98d0a95 716 0 2025-07-09 23:47:11 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4344.1.1-n-5de0cd73c3 csi-node-driver-vn5rz eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali02677d921c9 [] [] }} ContainerID="1289599eb1b1d2b235bd95c6b773c7e9fc63c25250a8fa0bd6988ea655c76f60" Namespace="calico-system" Pod="csi-node-driver-vn5rz" WorkloadEndpoint="ci--4344.1.1--n--5de0cd73c3-k8s-csi--node--driver--vn5rz-" Jul 9 23:47:49.169433 containerd[1923]: 2025-07-09 23:47:48.946 [INFO][5379] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1289599eb1b1d2b235bd95c6b773c7e9fc63c25250a8fa0bd6988ea655c76f60" Namespace="calico-system" Pod="csi-node-driver-vn5rz" WorkloadEndpoint="ci--4344.1.1--n--5de0cd73c3-k8s-csi--node--driver--vn5rz-eth0" Jul 9 23:47:49.169433 containerd[1923]: 2025-07-09 23:47:48.968 [INFO][5396] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1289599eb1b1d2b235bd95c6b773c7e9fc63c25250a8fa0bd6988ea655c76f60" HandleID="k8s-pod-network.1289599eb1b1d2b235bd95c6b773c7e9fc63c25250a8fa0bd6988ea655c76f60" Workload="ci--4344.1.1--n--5de0cd73c3-k8s-csi--node--driver--vn5rz-eth0" Jul 9 23:47:49.169433 containerd[1923]: 2025-07-09 23:47:48.968 [INFO][5396] ipam/ipam_plugin.go 265: 
Auto assigning IP ContainerID="1289599eb1b1d2b235bd95c6b773c7e9fc63c25250a8fa0bd6988ea655c76f60" HandleID="k8s-pod-network.1289599eb1b1d2b235bd95c6b773c7e9fc63c25250a8fa0bd6988ea655c76f60" Workload="ci--4344.1.1--n--5de0cd73c3-k8s-csi--node--driver--vn5rz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3980), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344.1.1-n-5de0cd73c3", "pod":"csi-node-driver-vn5rz", "timestamp":"2025-07-09 23:47:48.968332558 +0000 UTC"}, Hostname:"ci-4344.1.1-n-5de0cd73c3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 9 23:47:49.169433 containerd[1923]: 2025-07-09 23:47:48.968 [INFO][5396] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 9 23:47:49.169433 containerd[1923]: 2025-07-09 23:47:48.990 [INFO][5396] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 9 23:47:49.169433 containerd[1923]: 2025-07-09 23:47:48.991 [INFO][5396] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.1-n-5de0cd73c3' Jul 9 23:47:49.169433 containerd[1923]: 2025-07-09 23:47:49.059 [INFO][5396] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1289599eb1b1d2b235bd95c6b773c7e9fc63c25250a8fa0bd6988ea655c76f60" host="ci-4344.1.1-n-5de0cd73c3" Jul 9 23:47:49.169433 containerd[1923]: 2025-07-09 23:47:49.073 [INFO][5396] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.1-n-5de0cd73c3" Jul 9 23:47:49.169433 containerd[1923]: 2025-07-09 23:47:49.089 [INFO][5396] ipam/ipam.go 511: Trying affinity for 192.168.27.192/26 host="ci-4344.1.1-n-5de0cd73c3" Jul 9 23:47:49.169433 containerd[1923]: 2025-07-09 23:47:49.099 [INFO][5396] ipam/ipam.go 158: Attempting to load block cidr=192.168.27.192/26 host="ci-4344.1.1-n-5de0cd73c3" Jul 9 23:47:49.169433 containerd[1923]: 2025-07-09 23:47:49.102 [INFO][5396] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.27.192/26 host="ci-4344.1.1-n-5de0cd73c3" Jul 9 23:47:49.169433 containerd[1923]: 2025-07-09 23:47:49.102 [INFO][5396] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.27.192/26 handle="k8s-pod-network.1289599eb1b1d2b235bd95c6b773c7e9fc63c25250a8fa0bd6988ea655c76f60" host="ci-4344.1.1-n-5de0cd73c3" Jul 9 23:47:49.169433 containerd[1923]: 2025-07-09 23:47:49.105 [INFO][5396] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1289599eb1b1d2b235bd95c6b773c7e9fc63c25250a8fa0bd6988ea655c76f60 Jul 9 23:47:49.169433 containerd[1923]: 2025-07-09 23:47:49.119 [INFO][5396] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.27.192/26 handle="k8s-pod-network.1289599eb1b1d2b235bd95c6b773c7e9fc63c25250a8fa0bd6988ea655c76f60" host="ci-4344.1.1-n-5de0cd73c3" Jul 9 23:47:49.169433 containerd[1923]: 2025-07-09 23:47:49.132 [INFO][5396] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.27.198/26] block=192.168.27.192/26 handle="k8s-pod-network.1289599eb1b1d2b235bd95c6b773c7e9fc63c25250a8fa0bd6988ea655c76f60" host="ci-4344.1.1-n-5de0cd73c3" Jul 9 23:47:49.169433 containerd[1923]: 2025-07-09 23:47:49.132 [INFO][5396] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.27.198/26] handle="k8s-pod-network.1289599eb1b1d2b235bd95c6b773c7e9fc63c25250a8fa0bd6988ea655c76f60" host="ci-4344.1.1-n-5de0cd73c3" Jul 9 23:47:49.169433 containerd[1923]: 2025-07-09 23:47:49.132 [INFO][5396] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 9 23:47:49.169433 containerd[1923]: 2025-07-09 23:47:49.132 [INFO][5396] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.27.198/26] IPv6=[] ContainerID="1289599eb1b1d2b235bd95c6b773c7e9fc63c25250a8fa0bd6988ea655c76f60" HandleID="k8s-pod-network.1289599eb1b1d2b235bd95c6b773c7e9fc63c25250a8fa0bd6988ea655c76f60" Workload="ci--4344.1.1--n--5de0cd73c3-k8s-csi--node--driver--vn5rz-eth0" Jul 9 23:47:49.169817 containerd[1923]: 2025-07-09 23:47:49.138 [INFO][5379] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1289599eb1b1d2b235bd95c6b773c7e9fc63c25250a8fa0bd6988ea655c76f60" Namespace="calico-system" Pod="csi-node-driver-vn5rz" WorkloadEndpoint="ci--4344.1.1--n--5de0cd73c3-k8s-csi--node--driver--vn5rz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--n--5de0cd73c3-k8s-csi--node--driver--vn5rz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1652e582-9df7-47e7-ad42-d56cc98d0a95", ResourceVersion:"716", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 23, 47, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", 
"pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-n-5de0cd73c3", ContainerID:"", Pod:"csi-node-driver-vn5rz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.27.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali02677d921c9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 23:47:49.169817 containerd[1923]: 2025-07-09 23:47:49.138 [INFO][5379] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.27.198/32] ContainerID="1289599eb1b1d2b235bd95c6b773c7e9fc63c25250a8fa0bd6988ea655c76f60" Namespace="calico-system" Pod="csi-node-driver-vn5rz" WorkloadEndpoint="ci--4344.1.1--n--5de0cd73c3-k8s-csi--node--driver--vn5rz-eth0" Jul 9 23:47:49.169817 containerd[1923]: 2025-07-09 23:47:49.138 [INFO][5379] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali02677d921c9 ContainerID="1289599eb1b1d2b235bd95c6b773c7e9fc63c25250a8fa0bd6988ea655c76f60" Namespace="calico-system" Pod="csi-node-driver-vn5rz" WorkloadEndpoint="ci--4344.1.1--n--5de0cd73c3-k8s-csi--node--driver--vn5rz-eth0" Jul 9 23:47:49.169817 containerd[1923]: 2025-07-09 23:47:49.145 [INFO][5379] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1289599eb1b1d2b235bd95c6b773c7e9fc63c25250a8fa0bd6988ea655c76f60" Namespace="calico-system" Pod="csi-node-driver-vn5rz" WorkloadEndpoint="ci--4344.1.1--n--5de0cd73c3-k8s-csi--node--driver--vn5rz-eth0" Jul 9 23:47:49.169817 containerd[1923]: 2025-07-09 23:47:49.146 
[INFO][5379] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1289599eb1b1d2b235bd95c6b773c7e9fc63c25250a8fa0bd6988ea655c76f60" Namespace="calico-system" Pod="csi-node-driver-vn5rz" WorkloadEndpoint="ci--4344.1.1--n--5de0cd73c3-k8s-csi--node--driver--vn5rz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--n--5de0cd73c3-k8s-csi--node--driver--vn5rz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1652e582-9df7-47e7-ad42-d56cc98d0a95", ResourceVersion:"716", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 23, 47, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-n-5de0cd73c3", ContainerID:"1289599eb1b1d2b235bd95c6b773c7e9fc63c25250a8fa0bd6988ea655c76f60", Pod:"csi-node-driver-vn5rz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.27.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali02677d921c9", MAC:"3a:cb:c7:03:d3:d8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 23:47:49.169817 containerd[1923]: 2025-07-09 23:47:49.163 [INFO][5379] 
cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1289599eb1b1d2b235bd95c6b773c7e9fc63c25250a8fa0bd6988ea655c76f60" Namespace="calico-system" Pod="csi-node-driver-vn5rz" WorkloadEndpoint="ci--4344.1.1--n--5de0cd73c3-k8s-csi--node--driver--vn5rz-eth0" Jul 9 23:47:49.198655 containerd[1923]: time="2025-07-09T23:47:49.198619583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6ff74f4fb8-w9w6s,Uid:5351ad3a-79dc-4ade-b519-9b8bd32331ed,Namespace:calico-system,Attempt:0,} returns sandbox id \"4e0b8fecfad4589d9ac7a2469368670de1b87e359b3f1f7cec8c4fbc16c3c61e\"" Jul 9 23:47:49.276693 containerd[1923]: time="2025-07-09T23:47:49.276649358Z" level=info msg="connecting to shim 1289599eb1b1d2b235bd95c6b773c7e9fc63c25250a8fa0bd6988ea655c76f60" address="unix:///run/containerd/s/78442c0f3dedf8323c4be19acc51aa0d56173949b35819485c0050f1ac04bc65" namespace=k8s.io protocol=ttrpc version=3 Jul 9 23:47:49.302297 systemd[1]: Started cri-containerd-1289599eb1b1d2b235bd95c6b773c7e9fc63c25250a8fa0bd6988ea655c76f60.scope - libcontainer container 1289599eb1b1d2b235bd95c6b773c7e9fc63c25250a8fa0bd6988ea655c76f60. 
Jul 9 23:47:49.327841 containerd[1923]: time="2025-07-09T23:47:49.327765895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vn5rz,Uid:1652e582-9df7-47e7-ad42-d56cc98d0a95,Namespace:calico-system,Attempt:0,} returns sandbox id \"1289599eb1b1d2b235bd95c6b773c7e9fc63c25250a8fa0bd6988ea655c76f60\"" Jul 9 23:47:49.452352 systemd-networkd[1490]: calid79f6d1b211: Gained IPv6LL Jul 9 23:47:49.452604 systemd-networkd[1490]: calie87fdee6374: Gained IPv6LL Jul 9 23:47:49.883209 containerd[1923]: time="2025-07-09T23:47:49.883151664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dd99d6fcb-klwdl,Uid:9cfb4053-28d0-4909-b1fd-e33040f413d8,Namespace:calico-apiserver,Attempt:0,}" Jul 9 23:47:49.990489 systemd-networkd[1490]: cali57878d00caa: Link UP Jul 9 23:47:49.991537 systemd-networkd[1490]: cali57878d00caa: Gained carrier Jul 9 23:47:50.014435 containerd[1923]: 2025-07-09 23:47:49.929 [INFO][5517] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.1--n--5de0cd73c3-k8s-calico--apiserver--dd99d6fcb--klwdl-eth0 calico-apiserver-dd99d6fcb- calico-apiserver 9cfb4053-28d0-4909-b1fd-e33040f413d8 821 0 2025-07-09 23:47:07 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:dd99d6fcb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4344.1.1-n-5de0cd73c3 calico-apiserver-dd99d6fcb-klwdl eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali57878d00caa [] [] }} ContainerID="9f5010eaf0a33a8152251b045b5333f6713c439348cd851235e54ac11163fc1d" Namespace="calico-apiserver" Pod="calico-apiserver-dd99d6fcb-klwdl" WorkloadEndpoint="ci--4344.1.1--n--5de0cd73c3-k8s-calico--apiserver--dd99d6fcb--klwdl-" Jul 9 23:47:50.014435 containerd[1923]: 2025-07-09 23:47:49.929 [INFO][5517] 
cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9f5010eaf0a33a8152251b045b5333f6713c439348cd851235e54ac11163fc1d" Namespace="calico-apiserver" Pod="calico-apiserver-dd99d6fcb-klwdl" WorkloadEndpoint="ci--4344.1.1--n--5de0cd73c3-k8s-calico--apiserver--dd99d6fcb--klwdl-eth0" Jul 9 23:47:50.014435 containerd[1923]: 2025-07-09 23:47:49.948 [INFO][5531] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9f5010eaf0a33a8152251b045b5333f6713c439348cd851235e54ac11163fc1d" HandleID="k8s-pod-network.9f5010eaf0a33a8152251b045b5333f6713c439348cd851235e54ac11163fc1d" Workload="ci--4344.1.1--n--5de0cd73c3-k8s-calico--apiserver--dd99d6fcb--klwdl-eth0" Jul 9 23:47:50.014435 containerd[1923]: 2025-07-09 23:47:49.948 [INFO][5531] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9f5010eaf0a33a8152251b045b5333f6713c439348cd851235e54ac11163fc1d" HandleID="k8s-pod-network.9f5010eaf0a33a8152251b045b5333f6713c439348cd851235e54ac11163fc1d" Workload="ci--4344.1.1--n--5de0cd73c3-k8s-calico--apiserver--dd99d6fcb--klwdl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b130), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4344.1.1-n-5de0cd73c3", "pod":"calico-apiserver-dd99d6fcb-klwdl", "timestamp":"2025-07-09 23:47:49.948427635 +0000 UTC"}, Hostname:"ci-4344.1.1-n-5de0cd73c3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 9 23:47:50.014435 containerd[1923]: 2025-07-09 23:47:49.948 [INFO][5531] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 9 23:47:50.014435 containerd[1923]: 2025-07-09 23:47:49.948 [INFO][5531] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 9 23:47:50.014435 containerd[1923]: 2025-07-09 23:47:49.948 [INFO][5531] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.1-n-5de0cd73c3' Jul 9 23:47:50.014435 containerd[1923]: 2025-07-09 23:47:49.953 [INFO][5531] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9f5010eaf0a33a8152251b045b5333f6713c439348cd851235e54ac11163fc1d" host="ci-4344.1.1-n-5de0cd73c3" Jul 9 23:47:50.014435 containerd[1923]: 2025-07-09 23:47:49.958 [INFO][5531] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.1-n-5de0cd73c3" Jul 9 23:47:50.014435 containerd[1923]: 2025-07-09 23:47:49.962 [INFO][5531] ipam/ipam.go 511: Trying affinity for 192.168.27.192/26 host="ci-4344.1.1-n-5de0cd73c3" Jul 9 23:47:50.014435 containerd[1923]: 2025-07-09 23:47:49.963 [INFO][5531] ipam/ipam.go 158: Attempting to load block cidr=192.168.27.192/26 host="ci-4344.1.1-n-5de0cd73c3" Jul 9 23:47:50.014435 containerd[1923]: 2025-07-09 23:47:49.965 [INFO][5531] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.27.192/26 host="ci-4344.1.1-n-5de0cd73c3" Jul 9 23:47:50.014435 containerd[1923]: 2025-07-09 23:47:49.965 [INFO][5531] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.27.192/26 handle="k8s-pod-network.9f5010eaf0a33a8152251b045b5333f6713c439348cd851235e54ac11163fc1d" host="ci-4344.1.1-n-5de0cd73c3" Jul 9 23:47:50.014435 containerd[1923]: 2025-07-09 23:47:49.967 [INFO][5531] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9f5010eaf0a33a8152251b045b5333f6713c439348cd851235e54ac11163fc1d Jul 9 23:47:50.014435 containerd[1923]: 2025-07-09 23:47:49.973 [INFO][5531] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.27.192/26 handle="k8s-pod-network.9f5010eaf0a33a8152251b045b5333f6713c439348cd851235e54ac11163fc1d" host="ci-4344.1.1-n-5de0cd73c3" Jul 9 23:47:50.014435 containerd[1923]: 2025-07-09 23:47:49.984 [INFO][5531] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.27.199/26] block=192.168.27.192/26 handle="k8s-pod-network.9f5010eaf0a33a8152251b045b5333f6713c439348cd851235e54ac11163fc1d" host="ci-4344.1.1-n-5de0cd73c3" Jul 9 23:47:50.014435 containerd[1923]: 2025-07-09 23:47:49.984 [INFO][5531] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.27.199/26] handle="k8s-pod-network.9f5010eaf0a33a8152251b045b5333f6713c439348cd851235e54ac11163fc1d" host="ci-4344.1.1-n-5de0cd73c3" Jul 9 23:47:50.014435 containerd[1923]: 2025-07-09 23:47:49.984 [INFO][5531] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 9 23:47:50.014435 containerd[1923]: 2025-07-09 23:47:49.984 [INFO][5531] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.27.199/26] IPv6=[] ContainerID="9f5010eaf0a33a8152251b045b5333f6713c439348cd851235e54ac11163fc1d" HandleID="k8s-pod-network.9f5010eaf0a33a8152251b045b5333f6713c439348cd851235e54ac11163fc1d" Workload="ci--4344.1.1--n--5de0cd73c3-k8s-calico--apiserver--dd99d6fcb--klwdl-eth0" Jul 9 23:47:50.015959 containerd[1923]: 2025-07-09 23:47:49.986 [INFO][5517] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9f5010eaf0a33a8152251b045b5333f6713c439348cd851235e54ac11163fc1d" Namespace="calico-apiserver" Pod="calico-apiserver-dd99d6fcb-klwdl" WorkloadEndpoint="ci--4344.1.1--n--5de0cd73c3-k8s-calico--apiserver--dd99d6fcb--klwdl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--n--5de0cd73c3-k8s-calico--apiserver--dd99d6fcb--klwdl-eth0", GenerateName:"calico-apiserver-dd99d6fcb-", Namespace:"calico-apiserver", SelfLink:"", UID:"9cfb4053-28d0-4909-b1fd-e33040f413d8", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 23, 47, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"dd99d6fcb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-n-5de0cd73c3", ContainerID:"", Pod:"calico-apiserver-dd99d6fcb-klwdl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.27.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali57878d00caa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 23:47:50.015959 containerd[1923]: 2025-07-09 23:47:49.986 [INFO][5517] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.27.199/32] ContainerID="9f5010eaf0a33a8152251b045b5333f6713c439348cd851235e54ac11163fc1d" Namespace="calico-apiserver" Pod="calico-apiserver-dd99d6fcb-klwdl" WorkloadEndpoint="ci--4344.1.1--n--5de0cd73c3-k8s-calico--apiserver--dd99d6fcb--klwdl-eth0" Jul 9 23:47:50.015959 containerd[1923]: 2025-07-09 23:47:49.986 [INFO][5517] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali57878d00caa ContainerID="9f5010eaf0a33a8152251b045b5333f6713c439348cd851235e54ac11163fc1d" Namespace="calico-apiserver" Pod="calico-apiserver-dd99d6fcb-klwdl" WorkloadEndpoint="ci--4344.1.1--n--5de0cd73c3-k8s-calico--apiserver--dd99d6fcb--klwdl-eth0" Jul 9 23:47:50.015959 containerd[1923]: 2025-07-09 23:47:49.991 [INFO][5517] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9f5010eaf0a33a8152251b045b5333f6713c439348cd851235e54ac11163fc1d" Namespace="calico-apiserver" Pod="calico-apiserver-dd99d6fcb-klwdl" 
WorkloadEndpoint="ci--4344.1.1--n--5de0cd73c3-k8s-calico--apiserver--dd99d6fcb--klwdl-eth0" Jul 9 23:47:50.015959 containerd[1923]: 2025-07-09 23:47:49.992 [INFO][5517] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9f5010eaf0a33a8152251b045b5333f6713c439348cd851235e54ac11163fc1d" Namespace="calico-apiserver" Pod="calico-apiserver-dd99d6fcb-klwdl" WorkloadEndpoint="ci--4344.1.1--n--5de0cd73c3-k8s-calico--apiserver--dd99d6fcb--klwdl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--n--5de0cd73c3-k8s-calico--apiserver--dd99d6fcb--klwdl-eth0", GenerateName:"calico-apiserver-dd99d6fcb-", Namespace:"calico-apiserver", SelfLink:"", UID:"9cfb4053-28d0-4909-b1fd-e33040f413d8", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 23, 47, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"dd99d6fcb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-n-5de0cd73c3", ContainerID:"9f5010eaf0a33a8152251b045b5333f6713c439348cd851235e54ac11163fc1d", Pod:"calico-apiserver-dd99d6fcb-klwdl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.27.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali57878d00caa", MAC:"4a:e7:64:ce:3f:bc", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 23:47:50.015959 containerd[1923]: 2025-07-09 23:47:50.011 [INFO][5517] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9f5010eaf0a33a8152251b045b5333f6713c439348cd851235e54ac11163fc1d" Namespace="calico-apiserver" Pod="calico-apiserver-dd99d6fcb-klwdl" WorkloadEndpoint="ci--4344.1.1--n--5de0cd73c3-k8s-calico--apiserver--dd99d6fcb--klwdl-eth0" Jul 9 23:47:50.097191 containerd[1923]: time="2025-07-09T23:47:50.097145464Z" level=info msg="connecting to shim 9f5010eaf0a33a8152251b045b5333f6713c439348cd851235e54ac11163fc1d" address="unix:///run/containerd/s/7efdcdd244954ba5372d7219804a4db249e0cfd9a044355c473c560085917583" namespace=k8s.io protocol=ttrpc version=3 Jul 9 23:47:50.129434 systemd[1]: Started cri-containerd-9f5010eaf0a33a8152251b045b5333f6713c439348cd851235e54ac11163fc1d.scope - libcontainer container 9f5010eaf0a33a8152251b045b5333f6713c439348cd851235e54ac11163fc1d. Jul 9 23:47:50.156324 systemd-networkd[1490]: cali32fdb401af6: Gained IPv6LL Jul 9 23:47:50.385901 containerd[1923]: time="2025-07-09T23:47:50.385859258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dd99d6fcb-klwdl,Uid:9cfb4053-28d0-4909-b1fd-e33040f413d8,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"9f5010eaf0a33a8152251b045b5333f6713c439348cd851235e54ac11163fc1d\"" Jul 9 23:47:50.732305 systemd-networkd[1490]: cali60f3b5d0745: Gained IPv6LL Jul 9 23:47:50.884768 containerd[1923]: time="2025-07-09T23:47:50.884663173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-fqhgt,Uid:34683483-f4f9-4403-b516-8041d5bb8797,Namespace:kube-system,Attempt:0,}" Jul 9 23:47:50.894737 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1213021209.mount: Deactivated successfully. 
Jul 9 23:47:51.110914 systemd-networkd[1490]: calic8f15fc825f: Link UP Jul 9 23:47:51.111733 systemd-networkd[1490]: calic8f15fc825f: Gained carrier Jul 9 23:47:51.116260 systemd-networkd[1490]: cali02677d921c9: Gained IPv6LL Jul 9 23:47:51.116414 systemd-networkd[1490]: cali57878d00caa: Gained IPv6LL Jul 9 23:47:51.159374 containerd[1923]: 2025-07-09 23:47:50.967 [INFO][5599] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.1--n--5de0cd73c3-k8s-coredns--7c65d6cfc9--fqhgt-eth0 coredns-7c65d6cfc9- kube-system 34683483-f4f9-4403-b516-8041d5bb8797 820 0 2025-07-09 23:46:58 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4344.1.1-n-5de0cd73c3 coredns-7c65d6cfc9-fqhgt eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic8f15fc825f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="1d218e8f422f2eb89fb7f4c697dd11d939d3ead7513a8faade9163303684a1d3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-fqhgt" WorkloadEndpoint="ci--4344.1.1--n--5de0cd73c3-k8s-coredns--7c65d6cfc9--fqhgt-" Jul 9 23:47:51.159374 containerd[1923]: 2025-07-09 23:47:50.968 [INFO][5599] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1d218e8f422f2eb89fb7f4c697dd11d939d3ead7513a8faade9163303684a1d3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-fqhgt" WorkloadEndpoint="ci--4344.1.1--n--5de0cd73c3-k8s-coredns--7c65d6cfc9--fqhgt-eth0" Jul 9 23:47:51.159374 containerd[1923]: 2025-07-09 23:47:50.998 [INFO][5612] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1d218e8f422f2eb89fb7f4c697dd11d939d3ead7513a8faade9163303684a1d3" HandleID="k8s-pod-network.1d218e8f422f2eb89fb7f4c697dd11d939d3ead7513a8faade9163303684a1d3" Workload="ci--4344.1.1--n--5de0cd73c3-k8s-coredns--7c65d6cfc9--fqhgt-eth0" Jul 9 
23:47:51.159374 containerd[1923]: 2025-07-09 23:47:50.998 [INFO][5612] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1d218e8f422f2eb89fb7f4c697dd11d939d3ead7513a8faade9163303684a1d3" HandleID="k8s-pod-network.1d218e8f422f2eb89fb7f4c697dd11d939d3ead7513a8faade9163303684a1d3" Workload="ci--4344.1.1--n--5de0cd73c3-k8s-coredns--7c65d6cfc9--fqhgt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000254ff0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4344.1.1-n-5de0cd73c3", "pod":"coredns-7c65d6cfc9-fqhgt", "timestamp":"2025-07-09 23:47:50.998193252 +0000 UTC"}, Hostname:"ci-4344.1.1-n-5de0cd73c3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 9 23:47:51.159374 containerd[1923]: 2025-07-09 23:47:50.998 [INFO][5612] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 9 23:47:51.159374 containerd[1923]: 2025-07-09 23:47:50.998 [INFO][5612] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 9 23:47:51.159374 containerd[1923]: 2025-07-09 23:47:50.998 [INFO][5612] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.1-n-5de0cd73c3' Jul 9 23:47:51.159374 containerd[1923]: 2025-07-09 23:47:51.025 [INFO][5612] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1d218e8f422f2eb89fb7f4c697dd11d939d3ead7513a8faade9163303684a1d3" host="ci-4344.1.1-n-5de0cd73c3" Jul 9 23:47:51.159374 containerd[1923]: 2025-07-09 23:47:51.038 [INFO][5612] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.1-n-5de0cd73c3" Jul 9 23:47:51.159374 containerd[1923]: 2025-07-09 23:47:51.042 [INFO][5612] ipam/ipam.go 511: Trying affinity for 192.168.27.192/26 host="ci-4344.1.1-n-5de0cd73c3" Jul 9 23:47:51.159374 containerd[1923]: 2025-07-09 23:47:51.047 [INFO][5612] ipam/ipam.go 158: Attempting to load block cidr=192.168.27.192/26 host="ci-4344.1.1-n-5de0cd73c3" Jul 9 23:47:51.159374 containerd[1923]: 2025-07-09 23:47:51.050 [INFO][5612] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.27.192/26 host="ci-4344.1.1-n-5de0cd73c3" Jul 9 23:47:51.159374 containerd[1923]: 2025-07-09 23:47:51.050 [INFO][5612] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.27.192/26 handle="k8s-pod-network.1d218e8f422f2eb89fb7f4c697dd11d939d3ead7513a8faade9163303684a1d3" host="ci-4344.1.1-n-5de0cd73c3" Jul 9 23:47:51.159374 containerd[1923]: 2025-07-09 23:47:51.059 [INFO][5612] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1d218e8f422f2eb89fb7f4c697dd11d939d3ead7513a8faade9163303684a1d3 Jul 9 23:47:51.159374 containerd[1923]: 2025-07-09 23:47:51.079 [INFO][5612] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.27.192/26 handle="k8s-pod-network.1d218e8f422f2eb89fb7f4c697dd11d939d3ead7513a8faade9163303684a1d3" host="ci-4344.1.1-n-5de0cd73c3" Jul 9 23:47:51.159374 containerd[1923]: 2025-07-09 23:47:51.096 [INFO][5612] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.27.200/26] block=192.168.27.192/26 handle="k8s-pod-network.1d218e8f422f2eb89fb7f4c697dd11d939d3ead7513a8faade9163303684a1d3" host="ci-4344.1.1-n-5de0cd73c3" Jul 9 23:47:51.159374 containerd[1923]: 2025-07-09 23:47:51.098 [INFO][5612] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.27.200/26] handle="k8s-pod-network.1d218e8f422f2eb89fb7f4c697dd11d939d3ead7513a8faade9163303684a1d3" host="ci-4344.1.1-n-5de0cd73c3" Jul 9 23:47:51.159374 containerd[1923]: 2025-07-09 23:47:51.098 [INFO][5612] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 9 23:47:51.159374 containerd[1923]: 2025-07-09 23:47:51.098 [INFO][5612] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.27.200/26] IPv6=[] ContainerID="1d218e8f422f2eb89fb7f4c697dd11d939d3ead7513a8faade9163303684a1d3" HandleID="k8s-pod-network.1d218e8f422f2eb89fb7f4c697dd11d939d3ead7513a8faade9163303684a1d3" Workload="ci--4344.1.1--n--5de0cd73c3-k8s-coredns--7c65d6cfc9--fqhgt-eth0" Jul 9 23:47:51.159957 containerd[1923]: 2025-07-09 23:47:51.103 [INFO][5599] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1d218e8f422f2eb89fb7f4c697dd11d939d3ead7513a8faade9163303684a1d3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-fqhgt" WorkloadEndpoint="ci--4344.1.1--n--5de0cd73c3-k8s-coredns--7c65d6cfc9--fqhgt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--n--5de0cd73c3-k8s-coredns--7c65d6cfc9--fqhgt-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"34683483-f4f9-4403-b516-8041d5bb8797", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 23, 46, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-n-5de0cd73c3", ContainerID:"", Pod:"coredns-7c65d6cfc9-fqhgt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.27.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic8f15fc825f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 23:47:51.159957 containerd[1923]: 2025-07-09 23:47:51.104 [INFO][5599] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.27.200/32] ContainerID="1d218e8f422f2eb89fb7f4c697dd11d939d3ead7513a8faade9163303684a1d3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-fqhgt" WorkloadEndpoint="ci--4344.1.1--n--5de0cd73c3-k8s-coredns--7c65d6cfc9--fqhgt-eth0" Jul 9 23:47:51.159957 containerd[1923]: 2025-07-09 23:47:51.104 [INFO][5599] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic8f15fc825f ContainerID="1d218e8f422f2eb89fb7f4c697dd11d939d3ead7513a8faade9163303684a1d3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-fqhgt" WorkloadEndpoint="ci--4344.1.1--n--5de0cd73c3-k8s-coredns--7c65d6cfc9--fqhgt-eth0" Jul 9 23:47:51.159957 containerd[1923]: 2025-07-09 23:47:51.111 [INFO][5599] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="1d218e8f422f2eb89fb7f4c697dd11d939d3ead7513a8faade9163303684a1d3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-fqhgt" WorkloadEndpoint="ci--4344.1.1--n--5de0cd73c3-k8s-coredns--7c65d6cfc9--fqhgt-eth0" Jul 9 23:47:51.159957 containerd[1923]: 2025-07-09 23:47:51.114 [INFO][5599] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1d218e8f422f2eb89fb7f4c697dd11d939d3ead7513a8faade9163303684a1d3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-fqhgt" WorkloadEndpoint="ci--4344.1.1--n--5de0cd73c3-k8s-coredns--7c65d6cfc9--fqhgt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--n--5de0cd73c3-k8s-coredns--7c65d6cfc9--fqhgt-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"34683483-f4f9-4403-b516-8041d5bb8797", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 23, 46, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-n-5de0cd73c3", ContainerID:"1d218e8f422f2eb89fb7f4c697dd11d939d3ead7513a8faade9163303684a1d3", Pod:"coredns-7c65d6cfc9-fqhgt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.27.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic8f15fc825f", MAC:"12:9d:8a:a3:18:18", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 23:47:51.159957 containerd[1923]: 2025-07-09 23:47:51.154 [INFO][5599] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1d218e8f422f2eb89fb7f4c697dd11d939d3ead7513a8faade9163303684a1d3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-fqhgt" WorkloadEndpoint="ci--4344.1.1--n--5de0cd73c3-k8s-coredns--7c65d6cfc9--fqhgt-eth0" Jul 9 23:47:51.293272 containerd[1923]: time="2025-07-09T23:47:51.293195142Z" level=info msg="connecting to shim 1d218e8f422f2eb89fb7f4c697dd11d939d3ead7513a8faade9163303684a1d3" address="unix:///run/containerd/s/3266a15dfa01e7aaeddcd54654dbc9d086b1bcea22f7c3375b834ddde8293a34" namespace=k8s.io protocol=ttrpc version=3 Jul 9 23:47:51.329029 systemd[1]: Started cri-containerd-1d218e8f422f2eb89fb7f4c697dd11d939d3ead7513a8faade9163303684a1d3.scope - libcontainer container 1d218e8f422f2eb89fb7f4c697dd11d939d3ead7513a8faade9163303684a1d3. 
Jul 9 23:47:52.024838 containerd[1923]: time="2025-07-09T23:47:52.024789003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-fqhgt,Uid:34683483-f4f9-4403-b516-8041d5bb8797,Namespace:kube-system,Attempt:0,} returns sandbox id \"1d218e8f422f2eb89fb7f4c697dd11d939d3ead7513a8faade9163303684a1d3\"" Jul 9 23:47:52.028847 containerd[1923]: time="2025-07-09T23:47:52.028804791Z" level=info msg="CreateContainer within sandbox \"1d218e8f422f2eb89fb7f4c697dd11d939d3ead7513a8faade9163303684a1d3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 9 23:47:52.056207 containerd[1923]: time="2025-07-09T23:47:52.055916075Z" level=info msg="Container 116f4c97a04db3c2dbf7bcbf27d42f94f73445e3747446be66d44d940a5f5ef1: CDI devices from CRI Config.CDIDevices: []" Jul 9 23:47:52.058694 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount202221331.mount: Deactivated successfully. Jul 9 23:47:52.116752 containerd[1923]: time="2025-07-09T23:47:52.116705283Z" level=info msg="CreateContainer within sandbox \"1d218e8f422f2eb89fb7f4c697dd11d939d3ead7513a8faade9163303684a1d3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"116f4c97a04db3c2dbf7bcbf27d42f94f73445e3747446be66d44d940a5f5ef1\"" Jul 9 23:47:52.118239 containerd[1923]: time="2025-07-09T23:47:52.117923723Z" level=info msg="StartContainer for \"116f4c97a04db3c2dbf7bcbf27d42f94f73445e3747446be66d44d940a5f5ef1\"" Jul 9 23:47:52.119024 containerd[1923]: time="2025-07-09T23:47:52.119000375Z" level=info msg="connecting to shim 116f4c97a04db3c2dbf7bcbf27d42f94f73445e3747446be66d44d940a5f5ef1" address="unix:///run/containerd/s/3266a15dfa01e7aaeddcd54654dbc9d086b1bcea22f7c3375b834ddde8293a34" protocol=ttrpc version=3 Jul 9 23:47:52.135727 containerd[1923]: time="2025-07-09T23:47:52.135684756Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:47:52.138652 containerd[1923]: 
time="2025-07-09T23:47:52.138611652Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=61838790" Jul 9 23:47:52.144164 containerd[1923]: time="2025-07-09T23:47:52.144122625Z" level=info msg="ImageCreate event name:\"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:47:52.144337 systemd[1]: Started cri-containerd-116f4c97a04db3c2dbf7bcbf27d42f94f73445e3747446be66d44d940a5f5ef1.scope - libcontainer container 116f4c97a04db3c2dbf7bcbf27d42f94f73445e3747446be66d44d940a5f5ef1. Jul 9 23:47:52.153254 containerd[1923]: time="2025-07-09T23:47:52.153219292Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"61838636\" in 3.699895886s" Jul 9 23:47:52.153374 containerd[1923]: time="2025-07-09T23:47:52.153361105Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\"" Jul 9 23:47:52.153535 containerd[1923]: time="2025-07-09T23:47:52.153521454Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:47:52.155343 containerd[1923]: time="2025-07-09T23:47:52.155316393Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 9 23:47:52.158047 containerd[1923]: time="2025-07-09T23:47:52.158011106Z" level=info msg="CreateContainer within sandbox \"c0b8dcf1eabd42214b04dc71663a27ed7c1898a793ace0abf8a78a74bccd90b4\" for container 
&ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 9 23:47:52.181074 containerd[1923]: time="2025-07-09T23:47:52.181042760Z" level=info msg="StartContainer for \"116f4c97a04db3c2dbf7bcbf27d42f94f73445e3747446be66d44d940a5f5ef1\" returns successfully" Jul 9 23:47:52.194429 containerd[1923]: time="2025-07-09T23:47:52.193894751Z" level=info msg="Container bf01f1ffdd184bcb113f81e490159b97f048108274db6a34da70f14026c51cc3: CDI devices from CRI Config.CDIDevices: []" Jul 9 23:47:52.217719 containerd[1923]: time="2025-07-09T23:47:52.217669213Z" level=info msg="CreateContainer within sandbox \"c0b8dcf1eabd42214b04dc71663a27ed7c1898a793ace0abf8a78a74bccd90b4\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"bf01f1ffdd184bcb113f81e490159b97f048108274db6a34da70f14026c51cc3\"" Jul 9 23:47:52.218828 containerd[1923]: time="2025-07-09T23:47:52.218742848Z" level=info msg="StartContainer for \"bf01f1ffdd184bcb113f81e490159b97f048108274db6a34da70f14026c51cc3\"" Jul 9 23:47:52.220755 containerd[1923]: time="2025-07-09T23:47:52.220728578Z" level=info msg="connecting to shim bf01f1ffdd184bcb113f81e490159b97f048108274db6a34da70f14026c51cc3" address="unix:///run/containerd/s/714e6762cf61c86ebfa3b47d78c75b904bd4fd0861bfabaa8c32f1c34036139d" protocol=ttrpc version=3 Jul 9 23:47:52.242319 systemd[1]: Started cri-containerd-bf01f1ffdd184bcb113f81e490159b97f048108274db6a34da70f14026c51cc3.scope - libcontainer container bf01f1ffdd184bcb113f81e490159b97f048108274db6a34da70f14026c51cc3. 
Jul 9 23:47:52.278071 containerd[1923]: time="2025-07-09T23:47:52.277953644Z" level=info msg="StartContainer for \"bf01f1ffdd184bcb113f81e490159b97f048108274db6a34da70f14026c51cc3\" returns successfully" Jul 9 23:47:53.036354 systemd-networkd[1490]: calic8f15fc825f: Gained IPv6LL Jul 9 23:47:53.084044 kubelet[3395]: I0709 23:47:53.083928 3395 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-fqhgt" podStartSLOduration=55.083886695 podStartE2EDuration="55.083886695s" podCreationTimestamp="2025-07-09 23:46:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 23:47:53.083078245 +0000 UTC m=+61.260135215" watchObservedRunningTime="2025-07-09 23:47:53.083886695 +0000 UTC m=+61.260943625" Jul 9 23:47:53.100313 kubelet[3395]: I0709 23:47:53.099873 3395 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-cpv7t" podStartSLOduration=39.39759252 podStartE2EDuration="43.099858693s" podCreationTimestamp="2025-07-09 23:47:10 +0000 UTC" firstStartedPulling="2025-07-09 23:47:48.452628039 +0000 UTC m=+56.629684953" lastFinishedPulling="2025-07-09 23:47:52.154894196 +0000 UTC m=+60.331951126" observedRunningTime="2025-07-09 23:47:53.099621941 +0000 UTC m=+61.276678855" watchObservedRunningTime="2025-07-09 23:47:53.099858693 +0000 UTC m=+61.276915607" Jul 9 23:47:53.143544 containerd[1923]: time="2025-07-09T23:47:53.143507457Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bf01f1ffdd184bcb113f81e490159b97f048108274db6a34da70f14026c51cc3\" id:\"80358184ef9613db4c8760da97cd9840adedaa4e5144a86d338588e6c9a427a0\" pid:5758 exit_status:1 exited_at:{seconds:1752104873 nanos:142982088}" Jul 9 23:47:54.249207 containerd[1923]: time="2025-07-09T23:47:54.249146888Z" level=info msg="TaskExit event in podsandbox handler 
container_id:\"bf01f1ffdd184bcb113f81e490159b97f048108274db6a34da70f14026c51cc3\" id:\"fe9259d776f8722cf656f0a3db8988f44e6e3ae2812076a932fc2008da19fc9c\" pid:5790 exit_status:1 exited_at:{seconds:1752104874 nanos:248855767}" Jul 9 23:47:54.400041 containerd[1923]: time="2025-07-09T23:47:54.399927161Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:47:54.405975 containerd[1923]: time="2025-07-09T23:47:54.405941527Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=44517149" Jul 9 23:47:54.411491 containerd[1923]: time="2025-07-09T23:47:54.411448924Z" level=info msg="ImageCreate event name:\"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:47:54.419411 containerd[1923]: time="2025-07-09T23:47:54.419348648Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:47:54.420704 containerd[1923]: time="2025-07-09T23:47:54.420588929Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 2.26522371s" Jul 9 23:47:54.420704 containerd[1923]: time="2025-07-09T23:47:54.420618746Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 9 23:47:54.422045 containerd[1923]: time="2025-07-09T23:47:54.421881843Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 9 23:47:54.423205 containerd[1923]: time="2025-07-09T23:47:54.422835067Z" level=info msg="CreateContainer within sandbox \"51b9922ad3a02da0861d3e1f67ae4bc9d86ad99fe628938cf0074bfe471eb3aa\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 9 23:47:54.465073 containerd[1923]: time="2025-07-09T23:47:54.465033935Z" level=info msg="Container 9d322b458eedcdac11b4ac2db476f4e1592820de79054df2707e46c33352c16b: CDI devices from CRI Config.CDIDevices: []" Jul 9 23:47:54.492966 containerd[1923]: time="2025-07-09T23:47:54.492932069Z" level=info msg="CreateContainer within sandbox \"51b9922ad3a02da0861d3e1f67ae4bc9d86ad99fe628938cf0074bfe471eb3aa\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"9d322b458eedcdac11b4ac2db476f4e1592820de79054df2707e46c33352c16b\"" Jul 9 23:47:54.493687 containerd[1923]: time="2025-07-09T23:47:54.493643244Z" level=info msg="StartContainer for \"9d322b458eedcdac11b4ac2db476f4e1592820de79054df2707e46c33352c16b\"" Jul 9 23:47:54.494608 containerd[1923]: time="2025-07-09T23:47:54.494555818Z" level=info msg="connecting to shim 9d322b458eedcdac11b4ac2db476f4e1592820de79054df2707e46c33352c16b" address="unix:///run/containerd/s/61a60675a30ea7f119cf002ba086ec393122bc011402ac5ca08c7c8f56d4eb3e" protocol=ttrpc version=3 Jul 9 23:47:54.522373 systemd[1]: Started cri-containerd-9d322b458eedcdac11b4ac2db476f4e1592820de79054df2707e46c33352c16b.scope - libcontainer container 9d322b458eedcdac11b4ac2db476f4e1592820de79054df2707e46c33352c16b. 
Jul 9 23:47:54.605224 containerd[1923]: time="2025-07-09T23:47:54.604768492Z" level=info msg="StartContainer for \"9d322b458eedcdac11b4ac2db476f4e1592820de79054df2707e46c33352c16b\" returns successfully" Jul 9 23:47:55.107154 kubelet[3395]: I0709 23:47:55.107012 3395 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-dd99d6fcb-jl4z7" podStartSLOduration=42.251842486 podStartE2EDuration="48.106995647s" podCreationTimestamp="2025-07-09 23:47:07 +0000 UTC" firstStartedPulling="2025-07-09 23:47:48.566613718 +0000 UTC m=+56.743670632" lastFinishedPulling="2025-07-09 23:47:54.421766871 +0000 UTC m=+62.598823793" observedRunningTime="2025-07-09 23:47:55.094959379 +0000 UTC m=+63.272016293" watchObservedRunningTime="2025-07-09 23:47:55.106995647 +0000 UTC m=+63.284052561" Jul 9 23:47:56.081045 kubelet[3395]: I0709 23:47:56.080879 3395 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 9 23:47:56.646548 containerd[1923]: time="2025-07-09T23:47:56.646506938Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:47:56.655187 containerd[1923]: time="2025-07-09T23:47:56.655136380Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=48128336" Jul 9 23:47:56.667484 containerd[1923]: time="2025-07-09T23:47:56.667342483Z" level=info msg="ImageCreate event name:\"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:47:56.674569 containerd[1923]: time="2025-07-09T23:47:56.674519848Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:47:56.674860 containerd[1923]: 
time="2025-07-09T23:47:56.674792872Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"49497545\" in 2.252885772s" Jul 9 23:47:56.674860 containerd[1923]: time="2025-07-09T23:47:56.674829330Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\"" Jul 9 23:47:56.677574 containerd[1923]: time="2025-07-09T23:47:56.677352519Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 9 23:47:56.688728 containerd[1923]: time="2025-07-09T23:47:56.688704820Z" level=info msg="CreateContainer within sandbox \"4e0b8fecfad4589d9ac7a2469368670de1b87e359b3f1f7cec8c4fbc16c3c61e\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 9 23:47:56.729349 containerd[1923]: time="2025-07-09T23:47:56.729285404Z" level=info msg="Container 6093d98c1d27b24283f7302be0eac3a9ef9cddaad9b310f71005e5d9f66cc2bb: CDI devices from CRI Config.CDIDevices: []" Jul 9 23:47:56.772078 containerd[1923]: time="2025-07-09T23:47:56.772035271Z" level=info msg="CreateContainer within sandbox \"4e0b8fecfad4589d9ac7a2469368670de1b87e359b3f1f7cec8c4fbc16c3c61e\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"6093d98c1d27b24283f7302be0eac3a9ef9cddaad9b310f71005e5d9f66cc2bb\"" Jul 9 23:47:56.772904 containerd[1923]: time="2025-07-09T23:47:56.772657890Z" level=info msg="StartContainer for \"6093d98c1d27b24283f7302be0eac3a9ef9cddaad9b310f71005e5d9f66cc2bb\"" Jul 9 23:47:56.773739 containerd[1923]: time="2025-07-09T23:47:56.773718523Z" level=info msg="connecting to shim 
6093d98c1d27b24283f7302be0eac3a9ef9cddaad9b310f71005e5d9f66cc2bb" address="unix:///run/containerd/s/36762f1f1458ba52f066c401e03ce1b48a35e97ec3d75c4f3a1b770faa01af5e" protocol=ttrpc version=3 Jul 9 23:47:56.819302 systemd[1]: Started cri-containerd-6093d98c1d27b24283f7302be0eac3a9ef9cddaad9b310f71005e5d9f66cc2bb.scope - libcontainer container 6093d98c1d27b24283f7302be0eac3a9ef9cddaad9b310f71005e5d9f66cc2bb. Jul 9 23:47:56.857425 containerd[1923]: time="2025-07-09T23:47:56.857393880Z" level=info msg="StartContainer for \"6093d98c1d27b24283f7302be0eac3a9ef9cddaad9b310f71005e5d9f66cc2bb\" returns successfully" Jul 9 23:47:57.102268 kubelet[3395]: I0709 23:47:57.102221 3395 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6ff74f4fb8-w9w6s" podStartSLOduration=38.626925688 podStartE2EDuration="46.102202481s" podCreationTimestamp="2025-07-09 23:47:11 +0000 UTC" firstStartedPulling="2025-07-09 23:47:49.201005797 +0000 UTC m=+57.378062711" lastFinishedPulling="2025-07-09 23:47:56.67628259 +0000 UTC m=+64.853339504" observedRunningTime="2025-07-09 23:47:57.099713173 +0000 UTC m=+65.276770095" watchObservedRunningTime="2025-07-09 23:47:57.102202481 +0000 UTC m=+65.279259395" Jul 9 23:47:57.112922 containerd[1923]: time="2025-07-09T23:47:57.112893498Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6093d98c1d27b24283f7302be0eac3a9ef9cddaad9b310f71005e5d9f66cc2bb\" id:\"ead817d2175656eb317319f304f1ecdccc83bc9f4c3a67b2626a5ef0c281c53c\" pid:5905 exited_at:{seconds:1752104877 nanos:112528151}" Jul 9 23:47:58.640602 containerd[1923]: time="2025-07-09T23:47:58.640556793Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:47:58.644837 containerd[1923]: time="2025-07-09T23:47:58.644811484Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8225702" Jul 9 
23:47:58.651184 containerd[1923]: time="2025-07-09T23:47:58.651157103Z" level=info msg="ImageCreate event name:\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:47:58.660847 containerd[1923]: time="2025-07-09T23:47:58.660819592Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:47:58.661366 containerd[1923]: time="2025-07-09T23:47:58.661162491Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"9594943\" in 1.983462449s" Jul 9 23:47:58.661366 containerd[1923]: time="2025-07-09T23:47:58.661212940Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\"" Jul 9 23:47:58.662229 containerd[1923]: time="2025-07-09T23:47:58.662210355Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 9 23:47:58.663801 containerd[1923]: time="2025-07-09T23:47:58.663782091Z" level=info msg="CreateContainer within sandbox \"1289599eb1b1d2b235bd95c6b773c7e9fc63c25250a8fa0bd6988ea655c76f60\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 9 23:47:58.706302 containerd[1923]: time="2025-07-09T23:47:58.706270958Z" level=info msg="Container 00964a548c410847eec9b1ce496e2497557e6621dece190a3ac9a1687410315e: CDI devices from CRI Config.CDIDevices: []" Jul 9 23:47:58.708337 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount35837992.mount: Deactivated successfully. 
Jul 9 23:47:58.736282 containerd[1923]: time="2025-07-09T23:47:58.736251416Z" level=info msg="CreateContainer within sandbox \"1289599eb1b1d2b235bd95c6b773c7e9fc63c25250a8fa0bd6988ea655c76f60\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"00964a548c410847eec9b1ce496e2497557e6621dece190a3ac9a1687410315e\"" Jul 9 23:47:58.736772 containerd[1923]: time="2025-07-09T23:47:58.736734071Z" level=info msg="StartContainer for \"00964a548c410847eec9b1ce496e2497557e6621dece190a3ac9a1687410315e\"" Jul 9 23:47:58.737849 containerd[1923]: time="2025-07-09T23:47:58.737823593Z" level=info msg="connecting to shim 00964a548c410847eec9b1ce496e2497557e6621dece190a3ac9a1687410315e" address="unix:///run/containerd/s/78442c0f3dedf8323c4be19acc51aa0d56173949b35819485c0050f1ac04bc65" protocol=ttrpc version=3 Jul 9 23:47:58.757299 systemd[1]: Started cri-containerd-00964a548c410847eec9b1ce496e2497557e6621dece190a3ac9a1687410315e.scope - libcontainer container 00964a548c410847eec9b1ce496e2497557e6621dece190a3ac9a1687410315e. 
Jul 9 23:47:58.797035 containerd[1923]: time="2025-07-09T23:47:58.796977948Z" level=info msg="StartContainer for \"00964a548c410847eec9b1ce496e2497557e6621dece190a3ac9a1687410315e\" returns successfully" Jul 9 23:47:59.304578 containerd[1923]: time="2025-07-09T23:47:59.304503269Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:47:59.308193 containerd[1923]: time="2025-07-09T23:47:59.307979256Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 9 23:47:59.309285 containerd[1923]: time="2025-07-09T23:47:59.309244350Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 646.958033ms" Jul 9 23:47:59.309739 containerd[1923]: time="2025-07-09T23:47:59.309715525Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 9 23:47:59.311647 containerd[1923]: time="2025-07-09T23:47:59.311531021Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 9 23:47:59.313359 containerd[1923]: time="2025-07-09T23:47:59.312813860Z" level=info msg="CreateContainer within sandbox \"9f5010eaf0a33a8152251b045b5333f6713c439348cd851235e54ac11163fc1d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 9 23:47:59.388631 containerd[1923]: time="2025-07-09T23:47:59.388604383Z" level=info msg="Container 72bff21221bbe0421c63dbc979147760bf03b2ec79e266f6bbf7387eecb2c459: CDI devices from CRI Config.CDIDevices: []" Jul 9 23:47:59.412810 containerd[1923]: 
time="2025-07-09T23:47:59.412776159Z" level=info msg="CreateContainer within sandbox \"9f5010eaf0a33a8152251b045b5333f6713c439348cd851235e54ac11163fc1d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"72bff21221bbe0421c63dbc979147760bf03b2ec79e266f6bbf7387eecb2c459\"" Jul 9 23:47:59.413651 containerd[1923]: time="2025-07-09T23:47:59.413548270Z" level=info msg="StartContainer for \"72bff21221bbe0421c63dbc979147760bf03b2ec79e266f6bbf7387eecb2c459\"" Jul 9 23:47:59.418133 containerd[1923]: time="2025-07-09T23:47:59.418104778Z" level=info msg="connecting to shim 72bff21221bbe0421c63dbc979147760bf03b2ec79e266f6bbf7387eecb2c459" address="unix:///run/containerd/s/7efdcdd244954ba5372d7219804a4db249e0cfd9a044355c473c560085917583" protocol=ttrpc version=3 Jul 9 23:47:59.435296 systemd[1]: Started cri-containerd-72bff21221bbe0421c63dbc979147760bf03b2ec79e266f6bbf7387eecb2c459.scope - libcontainer container 72bff21221bbe0421c63dbc979147760bf03b2ec79e266f6bbf7387eecb2c459. 
Jul 9 23:47:59.474118 containerd[1923]: time="2025-07-09T23:47:59.474086332Z" level=info msg="StartContainer for \"72bff21221bbe0421c63dbc979147760bf03b2ec79e266f6bbf7387eecb2c459\" returns successfully" Jul 9 23:48:01.027326 containerd[1923]: time="2025-07-09T23:48:01.027055053Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:48:01.032782 containerd[1923]: time="2025-07-09T23:48:01.032746268Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=13754366" Jul 9 23:48:01.038246 containerd[1923]: time="2025-07-09T23:48:01.038201524Z" level=info msg="ImageCreate event name:\"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:48:01.046911 containerd[1923]: time="2025-07-09T23:48:01.046882911Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:48:01.047857 containerd[1923]: time="2025-07-09T23:48:01.047836477Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"15123559\" in 1.736281951s" Jul 9 23:48:01.048076 containerd[1923]: time="2025-07-09T23:48:01.048061115Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\"" Jul 9 23:48:01.051214 containerd[1923]: 
time="2025-07-09T23:48:01.051187140Z" level=info msg="CreateContainer within sandbox \"1289599eb1b1d2b235bd95c6b773c7e9fc63c25250a8fa0bd6988ea655c76f60\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 9 23:48:01.093039 containerd[1923]: time="2025-07-09T23:48:01.090994732Z" level=info msg="Container a6e1ee4f96434611c09cf4f63d1255afb5bdbb1c885cf428c7491b736ecd5992: CDI devices from CRI Config.CDIDevices: []" Jul 9 23:48:01.133801 containerd[1923]: time="2025-07-09T23:48:01.133699125Z" level=info msg="CreateContainer within sandbox \"1289599eb1b1d2b235bd95c6b773c7e9fc63c25250a8fa0bd6988ea655c76f60\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"a6e1ee4f96434611c09cf4f63d1255afb5bdbb1c885cf428c7491b736ecd5992\"" Jul 9 23:48:01.135212 containerd[1923]: time="2025-07-09T23:48:01.135171466Z" level=info msg="StartContainer for \"a6e1ee4f96434611c09cf4f63d1255afb5bdbb1c885cf428c7491b736ecd5992\"" Jul 9 23:48:01.136313 containerd[1923]: time="2025-07-09T23:48:01.136285997Z" level=info msg="connecting to shim a6e1ee4f96434611c09cf4f63d1255afb5bdbb1c885cf428c7491b736ecd5992" address="unix:///run/containerd/s/78442c0f3dedf8323c4be19acc51aa0d56173949b35819485c0050f1ac04bc65" protocol=ttrpc version=3 Jul 9 23:48:01.154409 systemd[1]: Started cri-containerd-a6e1ee4f96434611c09cf4f63d1255afb5bdbb1c885cf428c7491b736ecd5992.scope - libcontainer container a6e1ee4f96434611c09cf4f63d1255afb5bdbb1c885cf428c7491b736ecd5992. 
Jul 9 23:48:01.201840 containerd[1923]: time="2025-07-09T23:48:01.201767731Z" level=info msg="StartContainer for \"a6e1ee4f96434611c09cf4f63d1255afb5bdbb1c885cf428c7491b736ecd5992\" returns successfully" Jul 9 23:48:01.551796 kubelet[3395]: I0709 23:48:01.551727 3395 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-dd99d6fcb-klwdl" podStartSLOduration=45.627859301 podStartE2EDuration="54.551712269s" podCreationTimestamp="2025-07-09 23:47:07 +0000 UTC" firstStartedPulling="2025-07-09 23:47:50.387217735 +0000 UTC m=+58.564274657" lastFinishedPulling="2025-07-09 23:47:59.311070711 +0000 UTC m=+67.488127625" observedRunningTime="2025-07-09 23:48:00.113702675 +0000 UTC m=+68.290759637" watchObservedRunningTime="2025-07-09 23:48:01.551712269 +0000 UTC m=+69.728769183" Jul 9 23:48:01.972284 kubelet[3395]: I0709 23:48:01.971552 3395 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 9 23:48:01.974346 kubelet[3395]: I0709 23:48:01.974328 3395 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 9 23:48:02.121710 kubelet[3395]: I0709 23:48:02.121664 3395 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-vn5rz" podStartSLOduration=39.401460886 podStartE2EDuration="51.121650117s" podCreationTimestamp="2025-07-09 23:47:11 +0000 UTC" firstStartedPulling="2025-07-09 23:47:49.328692934 +0000 UTC m=+57.505749848" lastFinishedPulling="2025-07-09 23:48:01.048882157 +0000 UTC m=+69.225939079" observedRunningTime="2025-07-09 23:48:02.120442528 +0000 UTC m=+70.297499442" watchObservedRunningTime="2025-07-09 23:48:02.121650117 +0000 UTC m=+70.298707039" Jul 9 23:48:02.368533 kubelet[3395]: I0709 23:48:02.368474 3395 prober_manager.go:312] "Failed to trigger a 
manual run" probe="Readiness" Jul 9 23:48:03.339244 containerd[1923]: time="2025-07-09T23:48:03.339184949Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bf01f1ffdd184bcb113f81e490159b97f048108274db6a34da70f14026c51cc3\" id:\"25e92ce1c248b3496a5d4e935511621bf993986b51c05be61e2417a55724d140\" pid:6055 exited_at:{seconds:1752104883 nanos:338726798}" Jul 9 23:48:08.400803 containerd[1923]: time="2025-07-09T23:48:08.400689046Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6093d98c1d27b24283f7302be0eac3a9ef9cddaad9b310f71005e5d9f66cc2bb\" id:\"032899240d1b1fdf1ded55e7ab7caae47d75537a9b44df77ec2f6bb10c29ffbd\" pid:6079 exited_at:{seconds:1752104888 nanos:400515873}" Jul 9 23:48:10.011808 containerd[1923]: time="2025-07-09T23:48:10.011766368Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b54f9d314e6f1c2bd3a3710160e0c3c20704906f83bd2d9ca1eeca664893d466\" id:\"fa47e7548890d698171dfe58936159422f6ede322667c6a4e751f1eef74e048c\" pid:6101 exited_at:{seconds:1752104890 nanos:11572458}" Jul 9 23:48:14.187797 containerd[1923]: time="2025-07-09T23:48:14.187731738Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6093d98c1d27b24283f7302be0eac3a9ef9cddaad9b310f71005e5d9f66cc2bb\" id:\"709006d65fbafd013c5bd1356f33004f6701015639bb12af52cfcff450f74a32\" pid:6128 exited_at:{seconds:1752104894 nanos:187524772}" Jul 9 23:48:33.338709 containerd[1923]: time="2025-07-09T23:48:33.338666135Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bf01f1ffdd184bcb113f81e490159b97f048108274db6a34da70f14026c51cc3\" id:\"782b15f4385cd46dd641f5246aa5a0781f61b6e0864d4eb3581b54738f6d47dc\" pid:6163 exited_at:{seconds:1752104913 nanos:338450792}" Jul 9 23:48:38.419408 containerd[1923]: time="2025-07-09T23:48:38.419346952Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6093d98c1d27b24283f7302be0eac3a9ef9cddaad9b310f71005e5d9f66cc2bb\" 
id:\"498c63ddf0cc8a65310bd3317f127b78e97e2cb19f41951b0145a44767721689\" pid:6187 exited_at:{seconds:1752104918 nanos:419126857}" Jul 9 23:48:38.919494 systemd[1]: Started sshd@7-10.200.20.11:22-10.200.16.10:35432.service - OpenSSH per-connection server daemon (10.200.16.10:35432). Jul 9 23:48:39.389220 sshd[6200]: Accepted publickey for core from 10.200.16.10 port 35432 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko Jul 9 23:48:39.390842 sshd-session[6200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:48:39.395317 systemd-logind[1904]: New session 10 of user core. Jul 9 23:48:39.399395 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 9 23:48:39.792537 sshd[6202]: Connection closed by 10.200.16.10 port 35432 Jul 9 23:48:39.793787 sshd-session[6200]: pam_unix(sshd:session): session closed for user core Jul 9 23:48:39.797743 systemd[1]: sshd@7-10.200.20.11:22-10.200.16.10:35432.service: Deactivated successfully. Jul 9 23:48:39.798487 systemd-logind[1904]: Session 10 logged out. Waiting for processes to exit. Jul 9 23:48:39.800423 systemd[1]: session-10.scope: Deactivated successfully. Jul 9 23:48:39.803051 systemd-logind[1904]: Removed session 10. Jul 9 23:48:39.998673 containerd[1923]: time="2025-07-09T23:48:39.998633888Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b54f9d314e6f1c2bd3a3710160e0c3c20704906f83bd2d9ca1eeca664893d466\" id:\"3a6425974bb42ba776b7d23cc91d44e10b654c4564c6228897facda503d5a5bc\" pid:6225 exited_at:{seconds:1752104919 nanos:998442850}" Jul 9 23:48:44.888635 systemd[1]: Started sshd@8-10.200.20.11:22-10.200.16.10:58744.service - OpenSSH per-connection server daemon (10.200.16.10:58744). 
Jul 9 23:48:45.381578 sshd[6237]: Accepted publickey for core from 10.200.16.10 port 58744 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko Jul 9 23:48:45.383068 sshd-session[6237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:48:45.387212 systemd-logind[1904]: New session 11 of user core. Jul 9 23:48:45.393282 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 9 23:48:45.800603 sshd[6239]: Connection closed by 10.200.16.10 port 58744 Jul 9 23:48:45.800891 sshd-session[6237]: pam_unix(sshd:session): session closed for user core Jul 9 23:48:45.804361 systemd-logind[1904]: Session 11 logged out. Waiting for processes to exit. Jul 9 23:48:45.805010 systemd[1]: sshd@8-10.200.20.11:22-10.200.16.10:58744.service: Deactivated successfully. Jul 9 23:48:45.806565 systemd[1]: session-11.scope: Deactivated successfully. Jul 9 23:48:45.809712 systemd-logind[1904]: Removed session 11. Jul 9 23:48:50.117352 containerd[1923]: time="2025-07-09T23:48:50.117314947Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bf01f1ffdd184bcb113f81e490159b97f048108274db6a34da70f14026c51cc3\" id:\"4955ce8ef63753f102377f14efc99bf98c7fa8e45fc785b37c94253319a4d864\" pid:6263 exited_at:{seconds:1752104930 nanos:117062202}" Jul 9 23:48:50.884362 systemd[1]: Started sshd@9-10.200.20.11:22-10.200.16.10:58488.service - OpenSSH per-connection server daemon (10.200.16.10:58488). Jul 9 23:48:51.314291 sshd[6274]: Accepted publickey for core from 10.200.16.10 port 58488 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko Jul 9 23:48:51.315270 sshd-session[6274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:48:51.318495 systemd-logind[1904]: New session 12 of user core. Jul 9 23:48:51.322273 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jul 9 23:48:51.685278 sshd[6276]: Connection closed by 10.200.16.10 port 58488 Jul 9 23:48:51.685716 sshd-session[6274]: pam_unix(sshd:session): session closed for user core Jul 9 23:48:51.687937 systemd[1]: sshd@9-10.200.20.11:22-10.200.16.10:58488.service: Deactivated successfully. Jul 9 23:48:51.689334 systemd[1]: session-12.scope: Deactivated successfully. Jul 9 23:48:51.690763 systemd-logind[1904]: Session 12 logged out. Waiting for processes to exit. Jul 9 23:48:51.692735 systemd-logind[1904]: Removed session 12. Jul 9 23:48:51.776565 systemd[1]: Started sshd@10-10.200.20.11:22-10.200.16.10:58492.service - OpenSSH per-connection server daemon (10.200.16.10:58492). Jul 9 23:48:52.247736 sshd[6288]: Accepted publickey for core from 10.200.16.10 port 58492 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko Jul 9 23:48:52.248847 sshd-session[6288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:48:52.252399 systemd-logind[1904]: New session 13 of user core. Jul 9 23:48:52.257285 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 9 23:48:52.661087 sshd[6292]: Connection closed by 10.200.16.10 port 58492 Jul 9 23:48:52.660936 sshd-session[6288]: pam_unix(sshd:session): session closed for user core Jul 9 23:48:52.663857 systemd-logind[1904]: Session 13 logged out. Waiting for processes to exit. Jul 9 23:48:52.665051 systemd[1]: sshd@10-10.200.20.11:22-10.200.16.10:58492.service: Deactivated successfully. Jul 9 23:48:52.666988 systemd[1]: session-13.scope: Deactivated successfully. Jul 9 23:48:52.669085 systemd-logind[1904]: Removed session 13. Jul 9 23:48:52.753358 systemd[1]: Started sshd@11-10.200.20.11:22-10.200.16.10:58494.service - OpenSSH per-connection server daemon (10.200.16.10:58494). 
Jul 9 23:48:53.242798 sshd[6302]: Accepted publickey for core from 10.200.16.10 port 58494 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko Jul 9 23:48:53.243852 sshd-session[6302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:48:53.247603 systemd-logind[1904]: New session 14 of user core. Jul 9 23:48:53.255415 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 9 23:48:53.647456 sshd[6304]: Connection closed by 10.200.16.10 port 58494 Jul 9 23:48:53.647305 sshd-session[6302]: pam_unix(sshd:session): session closed for user core Jul 9 23:48:53.650010 systemd-logind[1904]: Session 14 logged out. Waiting for processes to exit. Jul 9 23:48:53.650243 systemd[1]: sshd@11-10.200.20.11:22-10.200.16.10:58494.service: Deactivated successfully. Jul 9 23:48:53.651826 systemd[1]: session-14.scope: Deactivated successfully. Jul 9 23:48:53.653721 systemd-logind[1904]: Removed session 14. Jul 9 23:48:58.737852 systemd[1]: Started sshd@12-10.200.20.11:22-10.200.16.10:58502.service - OpenSSH per-connection server daemon (10.200.16.10:58502). Jul 9 23:48:59.215749 sshd[6328]: Accepted publickey for core from 10.200.16.10 port 58502 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko Jul 9 23:48:59.217121 sshd-session[6328]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:48:59.220998 systemd-logind[1904]: New session 15 of user core. Jul 9 23:48:59.226304 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 9 23:48:59.603968 sshd[6330]: Connection closed by 10.200.16.10 port 58502 Jul 9 23:48:59.604451 sshd-session[6328]: pam_unix(sshd:session): session closed for user core Jul 9 23:48:59.607430 systemd-logind[1904]: Session 15 logged out. Waiting for processes to exit. Jul 9 23:48:59.607681 systemd[1]: sshd@12-10.200.20.11:22-10.200.16.10:58502.service: Deactivated successfully. 
Jul 9 23:48:59.609105 systemd[1]: session-15.scope: Deactivated successfully. Jul 9 23:48:59.611005 systemd-logind[1904]: Removed session 15. Jul 9 23:49:03.345090 containerd[1923]: time="2025-07-09T23:49:03.345050354Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bf01f1ffdd184bcb113f81e490159b97f048108274db6a34da70f14026c51cc3\" id:\"9244986397bff29ae9350f1865ae309fa3409a680baefef8c37d7cc91bc1f7a9\" pid:6354 exited_at:{seconds:1752104943 nanos:344819170}" Jul 9 23:49:04.696285 systemd[1]: Started sshd@13-10.200.20.11:22-10.200.16.10:45046.service - OpenSSH per-connection server daemon (10.200.16.10:45046). Jul 9 23:49:05.187929 sshd[6365]: Accepted publickey for core from 10.200.16.10 port 45046 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko Jul 9 23:49:05.188968 sshd-session[6365]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:49:05.192936 systemd-logind[1904]: New session 16 of user core. Jul 9 23:49:05.198301 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 9 23:49:05.590789 sshd[6367]: Connection closed by 10.200.16.10 port 45046 Jul 9 23:49:05.591290 sshd-session[6365]: pam_unix(sshd:session): session closed for user core Jul 9 23:49:05.593999 systemd[1]: sshd@13-10.200.20.11:22-10.200.16.10:45046.service: Deactivated successfully. Jul 9 23:49:05.595745 systemd[1]: session-16.scope: Deactivated successfully. Jul 9 23:49:05.597063 systemd-logind[1904]: Session 16 logged out. Waiting for processes to exit. Jul 9 23:49:05.598149 systemd-logind[1904]: Removed session 16. 
Jul 9 23:49:08.397478 containerd[1923]: time="2025-07-09T23:49:08.397436801Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6093d98c1d27b24283f7302be0eac3a9ef9cddaad9b310f71005e5d9f66cc2bb\" id:\"c406e976eee4727e13d3bac9ada385e472782e145ec8a3920ab51ccd48c639b1\" pid:6389 exited_at:{seconds:1752104948 nanos:396666696}" Jul 9 23:49:10.021641 containerd[1923]: time="2025-07-09T23:49:10.021517276Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b54f9d314e6f1c2bd3a3710160e0c3c20704906f83bd2d9ca1eeca664893d466\" id:\"faeff5e586c8bd0b3ee8fcdff69ec04cf8379428e2c33e15bdde100db9bb59f6\" pid:6425 exited_at:{seconds:1752104950 nanos:21145719}" Jul 9 23:49:10.683858 systemd[1]: Started sshd@14-10.200.20.11:22-10.200.16.10:42076.service - OpenSSH per-connection server daemon (10.200.16.10:42076). Jul 9 23:49:11.179200 sshd[6438]: Accepted publickey for core from 10.200.16.10 port 42076 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko Jul 9 23:49:11.180328 sshd-session[6438]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:49:11.187147 systemd-logind[1904]: New session 17 of user core. Jul 9 23:49:11.191318 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 9 23:49:11.630492 sshd[6440]: Connection closed by 10.200.16.10 port 42076 Jul 9 23:49:11.631255 sshd-session[6438]: pam_unix(sshd:session): session closed for user core Jul 9 23:49:11.635752 systemd[1]: sshd@14-10.200.20.11:22-10.200.16.10:42076.service: Deactivated successfully. Jul 9 23:49:11.637716 systemd[1]: session-17.scope: Deactivated successfully. Jul 9 23:49:11.638716 systemd-logind[1904]: Session 17 logged out. Waiting for processes to exit. Jul 9 23:49:11.640138 systemd-logind[1904]: Removed session 17. 
Jul 9 23:49:14.190984 containerd[1923]: time="2025-07-09T23:49:14.190936416Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6093d98c1d27b24283f7302be0eac3a9ef9cddaad9b310f71005e5d9f66cc2bb\" id:\"57bb4b6f500c1788ad17411c98fef782bbd77ac1871a04c824ab34275840c8d7\" pid:6465 exited_at:{seconds:1752104954 nanos:190620245}" Jul 9 23:49:16.718721 systemd[1]: Started sshd@15-10.200.20.11:22-10.200.16.10:42078.service - OpenSSH per-connection server daemon (10.200.16.10:42078). Jul 9 23:49:17.210484 sshd[6482]: Accepted publickey for core from 10.200.16.10 port 42078 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko Jul 9 23:49:17.211524 sshd-session[6482]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:49:17.215066 systemd-logind[1904]: New session 18 of user core. Jul 9 23:49:17.223459 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 9 23:49:17.613620 sshd[6484]: Connection closed by 10.200.16.10 port 42078 Jul 9 23:49:17.614131 sshd-session[6482]: pam_unix(sshd:session): session closed for user core Jul 9 23:49:17.617425 systemd[1]: sshd@15-10.200.20.11:22-10.200.16.10:42078.service: Deactivated successfully. Jul 9 23:49:17.618909 systemd[1]: session-18.scope: Deactivated successfully. Jul 9 23:49:17.619578 systemd-logind[1904]: Session 18 logged out. Waiting for processes to exit. Jul 9 23:49:17.620663 systemd-logind[1904]: Removed session 18. Jul 9 23:49:17.703465 systemd[1]: Started sshd@16-10.200.20.11:22-10.200.16.10:42086.service - OpenSSH per-connection server daemon (10.200.16.10:42086). Jul 9 23:49:18.196709 sshd[6496]: Accepted publickey for core from 10.200.16.10 port 42086 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko Jul 9 23:49:18.198137 sshd-session[6496]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:49:18.202072 systemd-logind[1904]: New session 19 of user core. 
Jul 9 23:49:18.208304 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 9 23:49:18.648708 sshd[6499]: Connection closed by 10.200.16.10 port 42086
Jul 9 23:49:18.649350 sshd-session[6496]: pam_unix(sshd:session): session closed for user core
Jul 9 23:49:18.652113 systemd[1]: sshd@16-10.200.20.11:22-10.200.16.10:42086.service: Deactivated successfully.
Jul 9 23:49:18.654042 systemd[1]: session-19.scope: Deactivated successfully.
Jul 9 23:49:18.654921 systemd-logind[1904]: Session 19 logged out. Waiting for processes to exit.
Jul 9 23:49:18.656690 systemd-logind[1904]: Removed session 19.
Jul 9 23:49:18.728441 systemd[1]: Started sshd@17-10.200.20.11:22-10.200.16.10:42088.service - OpenSSH per-connection server daemon (10.200.16.10:42088).
Jul 9 23:49:19.205238 sshd[6509]: Accepted publickey for core from 10.200.16.10 port 42088 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko
Jul 9 23:49:19.206245 sshd-session[6509]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:49:19.209989 systemd-logind[1904]: New session 20 of user core.
Jul 9 23:49:19.217293 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 9 23:49:20.841756 sshd[6511]: Connection closed by 10.200.16.10 port 42088
Jul 9 23:49:20.842327 sshd-session[6509]: pam_unix(sshd:session): session closed for user core
Jul 9 23:49:20.845523 systemd[1]: sshd@17-10.200.20.11:22-10.200.16.10:42088.service: Deactivated successfully.
Jul 9 23:49:20.847650 systemd[1]: session-20.scope: Deactivated successfully.
Jul 9 23:49:20.848389 systemd[1]: session-20.scope: Consumed 311ms CPU time, 76.7M memory peak.
Jul 9 23:49:20.849659 systemd-logind[1904]: Session 20 logged out. Waiting for processes to exit.
Jul 9 23:49:20.850896 systemd-logind[1904]: Removed session 20.
Jul 9 23:49:20.937257 systemd[1]: Started sshd@18-10.200.20.11:22-10.200.16.10:43298.service - OpenSSH per-connection server daemon (10.200.16.10:43298).
Jul 9 23:49:21.436020 sshd[6528]: Accepted publickey for core from 10.200.16.10 port 43298 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko
Jul 9 23:49:21.437048 sshd-session[6528]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:49:21.440976 systemd-logind[1904]: New session 21 of user core.
Jul 9 23:49:21.447399 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 9 23:49:21.910400 sshd[6530]: Connection closed by 10.200.16.10 port 43298
Jul 9 23:49:21.910869 sshd-session[6528]: pam_unix(sshd:session): session closed for user core
Jul 9 23:49:21.913674 systemd[1]: sshd@18-10.200.20.11:22-10.200.16.10:43298.service: Deactivated successfully.
Jul 9 23:49:21.915357 systemd[1]: session-21.scope: Deactivated successfully.
Jul 9 23:49:21.916077 systemd-logind[1904]: Session 21 logged out. Waiting for processes to exit.
Jul 9 23:49:21.917373 systemd-logind[1904]: Removed session 21.
Jul 9 23:49:22.006640 systemd[1]: Started sshd@19-10.200.20.11:22-10.200.16.10:43304.service - OpenSSH per-connection server daemon (10.200.16.10:43304).
Jul 9 23:49:22.478630 sshd[6540]: Accepted publickey for core from 10.200.16.10 port 43304 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko
Jul 9 23:49:22.479703 sshd-session[6540]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:49:22.483769 systemd-logind[1904]: New session 22 of user core.
Jul 9 23:49:22.488287 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 9 23:49:22.864291 sshd[6542]: Connection closed by 10.200.16.10 port 43304
Jul 9 23:49:22.864706 sshd-session[6540]: pam_unix(sshd:session): session closed for user core
Jul 9 23:49:22.868151 systemd[1]: sshd@19-10.200.20.11:22-10.200.16.10:43304.service: Deactivated successfully.
Jul 9 23:49:22.870485 systemd[1]: session-22.scope: Deactivated successfully.
Jul 9 23:49:22.871593 systemd-logind[1904]: Session 22 logged out. Waiting for processes to exit.
Jul 9 23:49:22.872711 systemd-logind[1904]: Removed session 22.
Jul 9 23:49:27.958556 systemd[1]: Started sshd@20-10.200.20.11:22-10.200.16.10:43320.service - OpenSSH per-connection server daemon (10.200.16.10:43320).
Jul 9 23:49:28.433877 sshd[6557]: Accepted publickey for core from 10.200.16.10 port 43320 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko
Jul 9 23:49:28.434953 sshd-session[6557]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:49:28.438571 systemd-logind[1904]: New session 23 of user core.
Jul 9 23:49:28.447300 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 9 23:49:28.814992 sshd[6559]: Connection closed by 10.200.16.10 port 43320
Jul 9 23:49:28.815361 sshd-session[6557]: pam_unix(sshd:session): session closed for user core
Jul 9 23:49:28.818406 systemd-logind[1904]: Session 23 logged out. Waiting for processes to exit.
Jul 9 23:49:28.818918 systemd[1]: sshd@20-10.200.20.11:22-10.200.16.10:43320.service: Deactivated successfully.
Jul 9 23:49:28.820409 systemd[1]: session-23.scope: Deactivated successfully.
Jul 9 23:49:28.822032 systemd-logind[1904]: Removed session 23.
Jul 9 23:49:33.340082 containerd[1923]: time="2025-07-09T23:49:33.340041216Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bf01f1ffdd184bcb113f81e490159b97f048108274db6a34da70f14026c51cc3\" id:\"d2872d7fc00cb0f69e2c5689897cd883db7dcc97c9fd3558faa2bc0345f6e0ea\" pid:6583 exited_at:{seconds:1752104973 nanos:339656467}"
Jul 9 23:49:33.904348 systemd[1]: Started sshd@21-10.200.20.11:22-10.200.16.10:49372.service - OpenSSH per-connection server daemon (10.200.16.10:49372).
Jul 9 23:49:34.402727 sshd[6594]: Accepted publickey for core from 10.200.16.10 port 49372 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko
Jul 9 23:49:34.403889 sshd-session[6594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:49:34.408234 systemd-logind[1904]: New session 24 of user core.
Jul 9 23:49:34.415320 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 9 23:49:34.801215 sshd[6596]: Connection closed by 10.200.16.10 port 49372
Jul 9 23:49:34.801719 sshd-session[6594]: pam_unix(sshd:session): session closed for user core
Jul 9 23:49:34.804691 systemd[1]: sshd@21-10.200.20.11:22-10.200.16.10:49372.service: Deactivated successfully.
Jul 9 23:49:34.806430 systemd[1]: session-24.scope: Deactivated successfully.
Jul 9 23:49:34.808109 systemd-logind[1904]: Session 24 logged out. Waiting for processes to exit.
Jul 9 23:49:34.809383 systemd-logind[1904]: Removed session 24.
Jul 9 23:49:38.395993 containerd[1923]: time="2025-07-09T23:49:38.395951830Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6093d98c1d27b24283f7302be0eac3a9ef9cddaad9b310f71005e5d9f66cc2bb\" id:\"affab681ac0e8d830d9cef49bd799f13fe491b42c4a8db8fc5d1d86f443a54cd\" pid:6619 exited_at:{seconds:1752104978 nanos:395717766}"
Jul 9 23:49:40.000986 containerd[1923]: time="2025-07-09T23:49:40.000945492Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b54f9d314e6f1c2bd3a3710160e0c3c20704906f83bd2d9ca1eeca664893d466\" id:\"2ec8c0d316d4fef8a67c340b3a4d65f7949aab0dd8071cddf2087d5c80c917e8\" pid:6640 exited_at:{seconds:1752104980 nanos:709164}"
Jul 9 23:49:42.957955 systemd[1]: Started sshd@22-10.200.20.11:22-10.200.16.10:45334.service - OpenSSH per-connection server daemon (10.200.16.10:45334).
Jul 9 23:49:43.410568 sshd[6653]: Accepted publickey for core from 10.200.16.10 port 45334 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko
Jul 9 23:49:43.411559 sshd-session[6653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:49:43.415389 systemd-logind[1904]: New session 25 of user core.
Jul 9 23:49:43.426299 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 9 23:49:43.779012 sshd[6655]: Connection closed by 10.200.16.10 port 45334
Jul 9 23:49:43.778566 sshd-session[6653]: pam_unix(sshd:session): session closed for user core
Jul 9 23:49:43.781420 systemd[1]: sshd@22-10.200.20.11:22-10.200.16.10:45334.service: Deactivated successfully.
Jul 9 23:49:43.782832 systemd[1]: session-25.scope: Deactivated successfully.
Jul 9 23:49:43.784066 systemd-logind[1904]: Session 25 logged out. Waiting for processes to exit.
Jul 9 23:49:43.784927 systemd-logind[1904]: Removed session 25.
Jul 9 23:49:48.872941 systemd[1]: Started sshd@23-10.200.20.11:22-10.200.16.10:45344.service - OpenSSH per-connection server daemon (10.200.16.10:45344).
Jul 9 23:49:49.366683 sshd[6668]: Accepted publickey for core from 10.200.16.10 port 45344 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko
Jul 9 23:49:49.367439 sshd-session[6668]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:49:49.371299 systemd-logind[1904]: New session 26 of user core.
Jul 9 23:49:49.374285 systemd[1]: Started session-26.scope - Session 26 of User core.
Jul 9 23:49:49.757606 sshd[6670]: Connection closed by 10.200.16.10 port 45344
Jul 9 23:49:49.758155 sshd-session[6668]: pam_unix(sshd:session): session closed for user core
Jul 9 23:49:49.760919 systemd[1]: sshd@23-10.200.20.11:22-10.200.16.10:45344.service: Deactivated successfully.
Jul 9 23:49:49.762517 systemd[1]: session-26.scope: Deactivated successfully.
Jul 9 23:49:49.763326 systemd-logind[1904]: Session 26 logged out. Waiting for processes to exit.
Jul 9 23:49:49.764862 systemd-logind[1904]: Removed session 26.
Jul 9 23:49:50.121149 containerd[1923]: time="2025-07-09T23:49:50.120955035Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bf01f1ffdd184bcb113f81e490159b97f048108274db6a34da70f14026c51cc3\" id:\"54fa772f8e89b4b97e9d8c22511a32f392ba6d3b0d2494e8f5d9517741885ef8\" pid:6693 exited_at:{seconds:1752104990 nanos:120663170}"
Jul 9 23:49:51.830362 update_engine[1907]: I20250709 23:49:51.830307 1907 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Jul 9 23:49:51.830362 update_engine[1907]: I20250709 23:49:51.830354 1907 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Jul 9 23:49:51.830711 update_engine[1907]: I20250709 23:49:51.830554 1907 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Jul 9 23:49:51.830994 update_engine[1907]: I20250709 23:49:51.830850 1907 omaha_request_params.cc:62] Current group set to beta
Jul 9 23:49:51.831606 update_engine[1907]: I20250709 23:49:51.831580 1907 update_attempter.cc:499] Already updated boot flags. Skipping.
Jul 9 23:49:51.831606 update_engine[1907]: I20250709 23:49:51.831601 1907 update_attempter.cc:643] Scheduling an action processor start.
Jul 9 23:49:51.831670 update_engine[1907]: I20250709 23:49:51.831616 1907 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Jul 9 23:49:51.834084 update_engine[1907]: I20250709 23:49:51.833134 1907 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Jul 9 23:49:51.834084 update_engine[1907]: I20250709 23:49:51.833209 1907 omaha_request_action.cc:271] Posting an Omaha request to disabled
Jul 9 23:49:51.834084 update_engine[1907]: I20250709 23:49:51.833214 1907 omaha_request_action.cc:272] Request:
Jul 9 23:49:51.834084 update_engine[1907]:
Jul 9 23:49:51.834084 update_engine[1907]:
Jul 9 23:49:51.834084 update_engine[1907]:
Jul 9 23:49:51.834084 update_engine[1907]:
Jul 9 23:49:51.834084 update_engine[1907]:
Jul 9 23:49:51.834084 update_engine[1907]:
Jul 9 23:49:51.834084 update_engine[1907]:
Jul 9 23:49:51.834084 update_engine[1907]:
Jul 9 23:49:51.834084 update_engine[1907]: I20250709 23:49:51.833220 1907 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 9 23:49:51.834883 update_engine[1907]: I20250709 23:49:51.834766 1907 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 9 23:49:51.835151 update_engine[1907]: I20250709 23:49:51.835072 1907 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jul 9 23:49:51.840357 locksmithd[2018]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Jul 9 23:49:52.117819 update_engine[1907]: E20250709 23:49:52.117713 1907 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 9 23:49:52.117819 update_engine[1907]: I20250709 23:49:52.117791 1907 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Jul 9 23:49:54.866381 systemd[1]: Started sshd@24-10.200.20.11:22-10.200.16.10:35390.service - OpenSSH per-connection server daemon (10.200.16.10:35390).
Jul 9 23:49:55.355876 sshd[6705]: Accepted publickey for core from 10.200.16.10 port 35390 ssh2: RSA SHA256:zFMRRzzSGWgmvEk8T0W8VsmZJ1v5NiT01j8gkhQ3zko
Jul 9 23:49:55.356939 sshd-session[6705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:49:55.360679 systemd-logind[1904]: New session 27 of user core.
Jul 9 23:49:55.366283 systemd[1]: Started session-27.scope - Session 27 of User core.
Jul 9 23:49:55.757415 sshd[6707]: Connection closed by 10.200.16.10 port 35390
Jul 9 23:49:55.757339 sshd-session[6705]: pam_unix(sshd:session): session closed for user core
Jul 9 23:49:55.760245 systemd[1]: sshd@24-10.200.20.11:22-10.200.16.10:35390.service: Deactivated successfully.
Jul 9 23:49:55.761850 systemd[1]: session-27.scope: Deactivated successfully.
Jul 9 23:49:55.762507 systemd-logind[1904]: Session 27 logged out. Waiting for processes to exit.
Jul 9 23:49:55.763614 systemd-logind[1904]: Removed session 27.